Euclid Preparation TBD. Characterization of convolutional neural networks for the identification of galaxy-galaxy strong lensing events. (arXiv:2307.08736v1 [astro-ph.GA])
Euclid Collaboration: L. Leuzzi (1 and 2), M. Meneghetti (2 and 3), G. Angora (4 and 5), R. B. Metcalf (1), L. Moscardini (1 and 2 and 3), P. Rosati (4 and 2), P. Bergamini (6 and 2), F. Calura (2), B. Clément (7), R. Gavazzi (8 and 9), F. Gentile (10 and 2), M. Lochner (11 and 12), C. Grillo (6 and 13), G. Vernardos (14), N. Aghanim (15), A. Amara (16), L. Amendola (17), S. Andreon (18), N. Auricchio (2), S. Bardelli (2), C. Bodendorf (19), D. Bonino (20), E. Branchini (21 and 22), M. Brescia (23 and 5), J. Brinchmann (24), S. Camera (25 and 26 and 20), V. Capobianco (20), C. Carbone (13), J. Carretero (27 and 28), S. Casas (29), M. Castellano (30), S. Cavuoti (5 and 31), A. Cimatti (32), R. Cledassou (33 and 34), G. Congedo (35), C. J. Conselice (36), L. Conversi (37 and 38), Y. Copin (39), et al. (180 additional authors not shown)

Forthcoming imaging surveys will potentially increase the number of known
galaxy-scale strong lenses by several orders of magnitude. For this to happen,
images of tens of millions of galaxies will have to be inspected to identify
potential candidates. In this context, deep learning techniques are
particularly well suited to finding patterns in large data sets, and
convolutional neural networks (CNNs) in particular can efficiently process
large volumes of images. We assess and compare the performance of three network
architectures in the classification of strong lensing systems on the basis of
their morphological characteristics. We train and test our models on different
subsamples of a data set of forty thousand mock images, having characteristics
similar to those expected in the wide survey planned with the ESA mission
Euclid, gradually including larger fractions of faint lenses. We also evaluate
the importance of adding information about the colour difference between the
lens and source galaxies by repeating the same training on single-band and
multi-band images. Our models find samples of clear lenses with $\gtrsim 90\%$
precision and completeness, without significant differences in the performance
of the three architectures. Nevertheless, when lenses with fainter arcs are
included in the training set, the performance of the three models deteriorates,
with accuracy values of $\sim 0.87$ to $\sim 0.75$ depending on the model. Our
analysis confirms the potential of applying CNNs to the identification of
galaxy-scale strong lenses. We suggest that specific training with separate
classes of lenses might be needed for detecting the faint lenses, since the
addition of the colour information does not yield a significant improvement in
the current analysis, with the accuracy ranging from $\sim 0.89$ to $\sim 0.78$
for the different models.
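As a purely illustrative sketch (not the three architectures evaluated in the paper), the snippet below shows how a CNN binary classifier for lens / non-lens image cutouts can be set up so that the only difference between the single-band and multi-band experiments is the number of input channels; the layer sizes, channel counts, and cutout size are assumptions for the example.

```python
# Minimal sketch of a lens / non-lens CNN classifier (architecture is
# illustrative, not the paper's). Switching from single-band to multi-band
# input only changes `in_channels`.
import torch
import torch.nn as nn

class LensCNN(nn.Module):
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling keeps the head size-agnostic
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 1),          # single logit: lens probability after a sigmoid
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Single-band model (e.g. one visible-band image) vs. a multi-band model
# (channel count of 4 is an assumption for illustration only).
single_band = LensCNN(in_channels=1)
multi_band = LensCNN(in_channels=4)

# Training would minimise a binary cross-entropy loss on lens / non-lens labels.
criterion = nn.BCEWithLogitsLoss()
dummy_batch = torch.randn(8, 1, 100, 100)        # 8 single-band 100x100 cutouts
loss = criterion(single_band(dummy_batch), torch.ones(8, 1))  # dummy labels
```

Precision, completeness, and accuracy quoted in the abstract would then be computed from the thresholded sigmoid outputs on a held-out test set.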
