Cloud Identification from All-sky Camera Data with Machine Learning. (arXiv:2003.11109v1 [astro-ph.IM])
Michael Mommert

Most ground-based observatories are equipped with wide-angle all-sky cameras
to monitor night-sky conditions. Such camera systems can be used to provide
early warning of incoming clouds that can pose a danger to the telescope
equipment through precipitation, as well as for sky quality monitoring. We
investigate the use of different machine learning approaches for automating the
identification of mostly opaque clouds in all-sky camera data as a cloud
warning system. In a deep-learning approach, we train a Residual Neural Network
(ResNet) on pre-labeled camera images. Our second approach extracts relevant
and localized image features from camera images and uses these data to train a
gradient-boosted tree-based model (LightGBM). We train both models on
a set of roughly 2,000 images taken by the all-sky camera located at Lowell
Observatory’s Discovery Channel Telescope, in which the presence of clouds has
been labeled manually. The ResNet approach reaches an accuracy of 85% in
detecting clouds in a given region of an image, but requires a significant
amount of computing resources. Our LightGBM approach achieves an accuracy of
95% with a training sample of ~1,000 images and rather modest computing
resources. Based on different performance metrics, we recommend the latter
feature-based approach for automated cloud detection. Code developed for
this work is available online.
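
As a rough illustration of the feature-based approach described above, the sketch below trains a LightGBM classifier on simple per-region image statistics. The feature choices (cell mean, standard deviation, median), the 4x4 region grid, and the synthetic stand-in data are illustrative assumptions only; they are not the features, labels, or data used in the paper.

# Minimal sketch: localized image features fed to a gradient-boosted tree model.
# All features and data below are hypothetical placeholders for illustration.
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import train_test_split

def extract_region_features(image, grid=(4, 4)):
    """Split an all-sky image into grid cells and compute simple per-cell
    statistics (assumed features, not those from the paper)."""
    h, w = image.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            cell = image[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            feats.extend([cell.mean(), cell.std(), np.median(cell)])
    return np.array(feats)

# Synthetic stand-in data: 1,000 random "images" with binary cloud labels.
rng = np.random.default_rng(0)
images = rng.random((1000, 480, 640))
labels = rng.integers(0, 2, size=1000)

X = np.stack([extract_region_features(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2)

# Gradient-boosted tree classifier (LightGBM, scikit-learn interface).
clf = lgb.LGBMClassifier(n_estimators=200, learning_rate=0.05)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

Tree models on compact, precomputed features of this kind train quickly on a CPU, which is consistent with the abstract's note that the feature-based approach requires only modest computing resources compared to the ResNet.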
