Probabilistic Inference of Cosmological Density Parameters from Synthetic Hubble Expansion Data of Varying SNR Using Diverse Artificial Neural Network Architectures
Zijian Jin, Jaehyon Rhee
arXiv:2510.12865v1 Announce Type: new
Abstract: This paper builds upon the ParamANN approach (S. Pal & R. Saha 2024) of using artificial neural networks (ANNs) to infer cosmological density parameters, determining the optimal architecture for estimating $\Omega_{m,0}$ and $\Omega_{\Lambda,0}$ from synthetic Hubble data of varying SNR across redshifts $z \in [0, 1]$. To generate the synthetic data, this study randomly sampled initial free parameter values at $z = 0$ from theoretically motivated priors and evolved them backwards using the first Friedmann equation to produce clean $H(z)$ curves. Realistic noise at high, normal, and low SNR was then added by sampling relative uncertainties from a Gaussian KDE fitted to 47 real observations compiled by A. Bouali et al. (2023). This study found that an RNN using BiLSTM layers is the most effective for high- and normal-SNR data across four quantitative metrics, while a combination of convolutional and recurrent layers using GRU performed best for low-SNR data across the same metrics. A comparison between this paper's ANN predictions and those of ParamANN shows that all architectures tested here, regardless of training SNR, are statistically consistent with ParamANN within 1 standard deviation. However, most ANN results are not statistically consistent within 3 standard deviations of Planck Collaboration et al. (2020), indicating a significant difference between the ANN estimates and the more traditional MCMC methods used by the Planck collaboration.
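The data-generation pipeline described in the abstract — sampling free parameters at $z = 0$ from priors, evolving $H(z)$ backwards via the first Friedmann equation, then adding noise with KDE-sampled relative uncertainties — could be sketched as below. This is a minimal illustration, not the authors' code: the prior ranges, the stand-in uncertainty sample, and all function names are assumptions, and the sketch assumes a Friedmann equation with a curvature term $\Omega_{k,0} = 1 - \Omega_{m,0} - \Omega_{\Lambda,0}$.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def hubble(z, H0, Om0, OL0):
    """First Friedmann equation: H(z) for given density parameters.

    Includes a curvature term Ok0 = 1 - Om0 - OL0 since the abstract
    treats Om0 and OL0 as independent free parameters.
    """
    Ok0 = 1.0 - Om0 - OL0
    return H0 * np.sqrt(Om0 * (1 + z) ** 3 + Ok0 * (1 + z) ** 2 + OL0)

# Redshift grid over z in [0, 1]
z = np.linspace(0.0, 1.0, 47)

# Hypothetical prior ranges (illustrative only, not the paper's priors)
H0 = rng.uniform(60.0, 80.0)
Om0 = rng.uniform(0.1, 0.5)
OL0 = rng.uniform(0.5, 0.9)
clean = hubble(z, H0, Om0, OL0)

# Stand-in for the relative uncertainties of the 47 compiled real
# observations; the paper fits the KDE to A. Bouali et al. (2023) data.
rel_unc_obs = rng.uniform(0.02, 0.15, size=47)
kde = gaussian_kde(rel_unc_obs)

# Draw a relative uncertainty per point and add Gaussian noise
rel = np.abs(kde.resample(z.size, seed=1)[0])
noisy = clean + rng.normal(0.0, rel * clean)
```

In this sketch, scaling the KDE draws by the clean curve reproduces the "relative uncertainty" construction: higher-SNR datasets would simply scale `rel` down before adding noise.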