Meta-Learning One-Class Classification with DeepSets: Application in the Milky Way. (arXiv:2007.04459v1 [cs.LG])

<a href="http://arxiv.org/find/cs/1/au:+Oladosu_A/0/1/0/all/0/1">Ademola Oladosu</a>, <a href="http://arxiv.org/find/cs/1/au:+Xu_T/0/1/0/all/0/1">Tony Xu</a>, <a href="http://arxiv.org/find/cs/1/au:+Ekfeldt_P/0/1/0/all/0/1">Philip Ekfeldt</a>, <a href="http://arxiv.org/find/cs/1/au:+Kelly_B/0/1/0/all/0/1">Brian A. Kelly</a>, <a href="http://arxiv.org/find/cs/1/au:+Cranmer_M/0/1/0/all/0/1">Miles Cranmer</a>, <a href="http://arxiv.org/find/cs/1/au:+Ho_S/0/1/0/all/0/1">Shirley Ho</a>, <a href="http://arxiv.org/find/cs/1/au:+Price_Whelan_A/0/1/0/all/0/1">Adrian M. Price-Whelan</a>, <a href="http://arxiv.org/find/cs/1/au:+Contardo_G/0/1/0/all/0/1">Gabriella Contardo</a>

In this paper we explore the use of neural networks designed for point clouds
and sets on a new meta-learning task. We present experiments on the
astronomical challenge of characterizing the stellar population of stellar
streams. Stellar streams are elongated structures of stars in the outskirts of
the Milky Way that form when a (small) galaxy breaks up under the Milky Way’s
gravitational force. We assume that, for each stream, we obtain a small
‘support set’ of stars that belong to this stream. We aim to predict whether
the other stars in that region of the sky are from that stream or not, similar
to one-class classification. Each “stream task” could also be transformed into
a binary classification problem in a highly imbalanced regime (or supervised
anomaly detection) by using the much bigger set of “other” stars and treating
them as noisy negative examples. We propose to study the problem in the
meta-learning regime: we expect that we can learn general information on
characterizing a stream’s stellar population by meta-learning across several
streams in a fully supervised regime, and transfer it to new streams using only
positive supervision. We present a novel use of Deep Sets, a model developed
for point clouds and sets, trained in a meta-learning fully supervised regime
and evaluated in a one-class classification setting. We compare it against
Random Forests (with and without self-labeling) in the classic setting of
binary classification, retrained for each task. We show that our method
outperforms the Random Forests even though the Deep Sets model is not retrained
on the new tasks and accesses only a small fraction of the data available to
the Random Forest. We also show that the model performs well on a real-life
stream when additional fine-tuning is included.
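The core property the abstract relies on is that a Deep Sets model produces a permutation-invariant embedding of a set (here, the support set of known stream stars), which can then condition the scoring of candidate stars. Below is a minimal sketch of that idea, not the authors' architecture: the per-element network phi, the pooling, the set-level network rho, and all dimensions and weights are illustrative placeholders (random, untrained).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: D_IN stellar features per star
# (e.g. sky position, proper motions, magnitude), D_HID-dim embeddings.
D_IN, D_HID = 5, 16

# Random weights stand in for trained parameters.
W_phi = rng.normal(size=(D_IN, D_HID))   # per-element network "phi"
W_rho = rng.normal(size=(D_HID, D_HID))  # set-level network "rho"
W_out = rng.normal(size=(2 * D_HID,))    # final scoring layer

def relu(x):
    return np.maximum(x, 0.0)

def embed_set(points):
    """Permutation-invariant set embedding: apply phi to each element,
    then sum-pool over the set axis (the defining Deep Sets operation)."""
    return relu(points @ W_phi).sum(axis=0)

def score_star(support_set, star):
    """Score one candidate star against a stream's support set by
    combining the set context with the star's own embedding."""
    context = relu(embed_set(support_set) @ W_rho)
    star_emb = relu(star @ W_phi)
    return float(np.concatenate([context, star_emb]) @ W_out)

support = rng.normal(size=(10, D_IN))    # 10 known stream members
candidate = rng.normal(size=(D_IN,))     # one star to classify

s1 = score_star(support, candidate)
s2 = score_star(support[::-1], candidate)  # same support set, reordered
assert np.isclose(s1, s2)  # score does not depend on support-set order
```

Because the sum pooling commutes with any reordering of the support set, the score depends only on which stars are in the support set, not on how they are listed; this is what lets the same trained model be applied to new streams with differently sized and ordered support sets.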

