Overlap-aware segmentation for topological reconstruction of obscured objects
J. Schueler, H. M. Araújo, S. N. Balashov, J. E. Borg, C. Brew, F. M. Brunbauer, C. Cazzaniga, A. Cottle, D. Edgeman, C. D. Frost, F. Garcia, D. Hunt, M. Kastriotou, P. Knights, H. Kraus, A. Lindote, M. Lisowska, D. Loomba, E. Lopez Asamar, P. A. Majewski, T. Marley, C. McCabe, L. Millins, R. Nandakumar, T. Neep, F. Neves, K. Nikolopoulos, E. Oliveri, A. Roy, T. J. Sumner, E. Tilly, W. Thompson, M. A. Vogiatzi
arXiv:2510.06194v1 Announce Type: cross
Abstract: The separation of overlapping objects presents a significant challenge in scientific imaging. While deep learning segmentation-regression algorithms can predict pixel-wise intensities, they typically treat all regions equally rather than prioritizing overlap regions, where attribution is most ambiguous. Recent advances in instance segmentation show that weighting pixel-overlap regions during training can improve segmentation boundary predictions in those regions, but this idea has not yet been extended to segmentation regression. We address this with Overlap-Aware Segmentation of ImageS (OASIS): a new segmentation-regression framework with a weighted loss function designed to prioritize regions of object overlap during training, enabling the extraction of pixel intensities and topological features from heavily obscured objects. We demonstrate OASIS in the context of the MIGDAL experiment, which aims to directly image the Migdal effect (a rare process in which nuclear scattering induces electron emission) in a low-pressure optical time projection chamber. This setting poses an extreme test case, as the target for reconstruction is a faint electron recoil track that is often heavily buried within the orders-of-magnitude brighter nuclear recoil track. Compared to unweighted training, OASIS improves median intensity reconstruction errors from -32% to -14% for low-energy electron tracks (4-5 keV) and improves topological intersection-over-union scores from 0.828 to 0.855. These performance gains demonstrate OASIS's ability to recover obscured signals in overlap-dominated regions. The framework provides a generalizable methodology for scientific imaging where pixels represent physical quantities and overlap obscures features of interest. All code is openly available to facilitate cross-domain adoption.
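The abstract does not spell out the loss function, but the core idea of up-weighting overlap pixels in a regression loss can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name, the weight-map construction, and the overlap_weight value are all hypothetical.

```python
import torch

def overlap_weighted_mse(pred, target, overlap_mask, overlap_weight=10.0):
    """Pixel-wise MSE whose contribution is up-weighted in overlap regions.

    pred, target   : (B, H, W) float tensors of per-pixel intensities
    overlap_mask   : (B, H, W) tensor, 1 where objects overlap, 0 elsewhere
    overlap_weight : multiplicative factor for overlap pixels (hypothetical
                     value; the actual weighting scheme is not given in the
                     abstract)
    """
    # Weight map: 1 everywhere, overlap_weight inside overlap regions
    weights = 1.0 + (overlap_weight - 1.0) * overlap_mask.float()
    per_pixel = weights * (pred - target) ** 2
    # Normalize by the total weight so the loss scale stays comparable
    # to an unweighted MSE
    return per_pixel.sum() / weights.sum()

# Example usage with dummy data
pred = torch.rand(2, 64, 64)
target = torch.rand(2, 64, 64)
overlap_mask = (torch.rand(2, 64, 64) > 0.9).float()
loss = overlap_weighted_mse(pred, target, overlap_mask)
```

Normalizing by the total weight rather than the pixel count keeps the loss magnitude stable as the fraction of overlap pixels varies between images, which matters when overlap regions are small relative to the full frame, as for a faint electron track buried in a bright nuclear recoil.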
