Notes on the Practical Application of Nested Sampling: MultiNest, (Non)convergence, and Rectification
Alexander J. Dittmann
arXiv:2404.16928v1 Announce Type: new
Abstract: Nested sampling is a promising tool for Bayesian statistical analysis because it simultaneously performs parameter estimation and facilitates model comparison. MultiNest is one of the most popular nested sampling implementations, and has been applied to a wide variety of problems in the physical sciences. However, MultiNest results are frequently unreliable, and accompanying convergence tests are a necessary component of any analysis. Using simple, analytically tractable test problems, I illustrate how MultiNest (1) can produce systematically biased estimates of the Bayesian evidence, with the bias growing as the dimensionality of the problem increases; (2) can derive posterior estimates with errors on the order of $\sim 100\%$; (3) is more likely to underestimate the width of a credible interval than to overestimate it -- to a minor degree for smooth problems, but much more so when sampling noisy likelihoods. Nevertheless, I show how MultiNest can be used to jump-start Markov chain Monte Carlo sampling or more rigorous nested sampling techniques, potentially accelerating more trustworthy measurements of posterior distributions and Bayesian evidences, and overcoming the challenge of Markov chain Monte Carlo initialization.
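As a rough illustration of the "jump-start" idea mentioned in the abstract (a sketch of the general workflow, not code from the paper): MultiNest writes equal-weight posterior samples to [root]post_equal_weights.dat, whose final column is the log-likelihood, and those samples can seed the walkers of an ensemble MCMC sampler such as emcee. The output basename, the placeholder log_posterior function, and the jitter scale below are assumptions for the example.

    # Sketch: initialise emcee walkers from a prior MultiNest run,
    # assuming MultiNest has already produced <root>post_equal_weights.dat.
    import numpy as np
    import emcee

    def log_posterior(theta):
        # Placeholder: replace with the actual log-posterior of the problem.
        return -0.5 * np.sum(theta**2)

    root = "chains/run1-"                      # MultiNest output basename (assumed)
    post = np.loadtxt(root + "post_equal_weights.dat")
    samples = post[:, :-1]                     # last column is the log-likelihood
    ndim = samples.shape[1]
    nwalkers = max(2 * ndim, 32)

    # Draw walker starting points from the MultiNest posterior samples,
    # adding a small jitter so duplicated rows do not give identical walkers.
    rng = np.random.default_rng(0)
    idx = rng.choice(len(samples), size=nwalkers, replace=True)
    p0 = samples[idx] + 1e-6 * rng.standard_normal((nwalkers, ndim))

    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
    sampler.run_mcmc(p0, 5000, progress=True)

In this scheme the possibly biased MultiNest posterior only supplies starting positions; the subsequent MCMC run is what provides the trustworthy posterior measurement.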