The Rubin Observatory’s Legacy Survey of Space and Time DP0.2 processing campaign at CC-IN2P3
Quentin Le Boulc’h, Fabio Hernandez, Gabriele Mainetti
arXiv:2404.06234v1 Announce Type: new
Abstract: The Vera C. Rubin Observatory, currently under construction in Chile, will start performing the Legacy Survey of Space and Time (LSST) in 2025 for 10 years. Its 8.4-meter telescope will survey the entire southern sky in less than 4 nights in six optical bands, repeatedly generating about 2 000 exposures per night, corresponding to a data volume of about 20 TiB per night. Three data facilities are preparing to contribute to the production of the annual data releases: the US Data Facility will process 35% of the raw data, the UK Data Facility will process 25%, and the French Data Facility, operated by CC-IN2P3, will locally process the remaining 40%. In the context of Data Preview 0.2 (DP0.2), the Data Release Production pipelines have been executed on the DC2 simulated dataset, generated by the Dark Energy Science Collaboration (DESC). This dataset includes 20 000 simulated exposures, covering 300 square degrees of Rubin images with a typical depth of 5 years. DP0.2 ran at the Interim Data Facility (based on Google Cloud), and the full exercise was independently replicated at CC-IN2P3. During this exercise, 3 PiB of data and more than 200 million files were produced. In this contribution we will present a detailed description of the system that we set up to perform this processing campaign using CC-IN2P3's computing and storage infrastructure. Several topics will be addressed: workflow generation and execution, batch job submission, memory and I/O requirements, etc. We will focus on the issues that arose during this campaign and how we addressed them, and will present some perspectives following this exercise.
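The Data Release Production pipelines discussed above organize their inputs and the roughly 200 million output files through the LSST Science Pipelines' Butler data abstraction. As an illustration only, not taken from the paper, the sketch below shows how one might enumerate datasets in a Butler repository after such a campaign; the repository path, collection name, and band filter are hypothetical placeholders.

```python
# Minimal sketch of querying a Butler data repository for campaign outputs.
# The repository path and run collection below are hypothetical examples.
from lsst.daf.butler import Butler

# Open the repository, restricting all queries to one run collection.
butler = Butler("/path/to/repo", collections=["u/example/DP0.2-run"])

# Enumerate calibrated exposures ("calexp" datasets) matching a data query;
# 'LSSTCam-imSim' is the instrument name used for the DC2 simulations.
refs = butler.registry.queryDatasets(
    "calexp",
    where="instrument = 'LSSTCam-imSim' AND band = 'r'",
)
for ref in refs:
    print(ref.dataId)
```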