Software & Datasets
SPICE-HL3: Single-Photon, Inertial, and Stereo Camera dataset for Exploration of High-Latitude Lunar Landscapes
Exploring high-latitude lunar regions presents an extremely challenging visual environment for robots. The low solar elevation angle and minimal light scattering produce a high-dynamic-range visual field dominated by long, dynamic shadows. Reproducing these conditions on Earth requires sophisticated simulators and specialized facilities. We introduce a unique dataset recorded at the LunaLab of the SnT, University of Luxembourg, an indoor test facility designed to replicate the optical characteristics of multiple lunar latitudes. Our dataset includes images, inertial measurements, and wheel odometry data from robots navigating seven distinct trajectories under multiple illumination scenarios, simulating high-latitude lunar conditions from dawn to nighttime, with and without the aid of headlights, resulting in 88 distinct sequences containing a total of 1.3M images. Data was captured using a stereo RGB-inertial sensor, a monocular monochrome camera, and, for the first time, a novel single-photon avalanche diode (SPAD) camera. We recorded both static and dynamic image sequences, with robots navigating at slow (5 cm/s) and fast (50 cm/s) speeds. All data is calibrated, synchronized, and timestamped, providing a valuable resource for validating perception tasks ranging from vision-based autonomous navigation to scientific imaging, whether for future lunar missions targeting high-latitude regions or for robots operating in perceptually degraded environments.
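As a minimal sketch of how such calibrated, timestamped multi-sensor streams are typically consumed, the snippet below pairs camera frames with their nearest IMU sample by timestamp. The directory layout, file names, and CSV columns are hypothetical illustrations, not the dataset's documented format; consult the dataset description for the actual structure.

```python
# Sketch: align timestamped images with IMU samples by nearest timestamp.
# Assumed (hypothetical) layout: sequence_01/cam0/<timestamp_ns>.png and
# sequence_01/imu.csv whose first column is a nanosecond timestamp.
import bisect
import csv
from pathlib import Path

def load_imu(csv_path):
    """Return (timestamps_ns, samples) from a CSV with a leading time column."""
    times, samples = [], []
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip header row
        for row in reader:
            times.append(int(row[0]))
            samples.append([float(v) for v in row[1:]])
    return times, samples

def nearest_imu_index(imu_times, t):
    """Binary-search for the IMU sample whose timestamp is closest to t."""
    i = bisect.bisect_left(imu_times, t)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(imu_times)]
    return min(candidates, key=lambda j: abs(imu_times[j] - t))

image_files = sorted(Path("sequence_01/cam0").glob("*.png"))  # hypothetical path
imu_times, imu_samples = load_imu("sequence_01/imu.csv")      # hypothetical path
for img in image_files:
    t = int(img.stem)  # assumes filenames encode nanosecond timestamps
    j = nearest_imu_index(imu_times, t)
    print(img.name, "-> closest IMU sample at", imu_times[j])
```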
Reference: Rodríguez-Martínez, D., van der Meer, D., Song, J., Bera, A., Pérez del Pulgar, C., & Olivares-Mendez, M. A. (2025). SPICE-HL3: Single-Photon, Inertial, and Stereo Camera dataset for Exploration of High-Latitude Lunar Landscapes, https://doi.org/10.48550/arXiv.2506.22956.
BASEPROD: The Bardenas Semi-Desert Planetary Rover Dataset
Datasets acquired specifically for robotic planetary exploration are key to advancing, evaluating, and validating novel perception, localization, and navigation methods in representative environments. Recorded in the Bardenas semi-desert in July 2023, the data presented in this Data Descriptor is primarily aimed at Martian exploration and contains relevant rover sensor data from approximately 1.7 km of traverses, a high-resolution 3D map of the test area, laser-induced breakdown spectroscopy recordings of rock samples along the rover path, and local weather data. In addition to optical cameras and inertial sensors, the rover features a thermal camera and six force-torque sensors. This setup enables, for example, the study of future localization, mapping, and navigation techniques in unstructured terrains for improved Guidance, Navigation, and Control (GNC). The main features of this dataset are the combination of scientific and engineering instrument data and, in particular, the inclusion of the thermal camera and force-torque sensors.
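As an illustrative sketch of working with the force-torque channel, the snippet below reduces six wheel-mounted force-torque readings to per-wheel force magnitudes, a common first step in wheel-terrain interaction studies. The file path and column layout (time followed by Fx, Fy, Fz, Tx, Ty, Tz per wheel) are assumptions for illustration, not the dataset's documented format.

```python
# Sketch: summarise six wheel force-torque streams into per-wheel |F|.
# Assumed (hypothetical) CSV layout per row:
#   t, Fx_0, Fy_0, Fz_0, Tx_0, Ty_0, Tz_0, ..., Tz_5
import numpy as np

data = np.loadtxt("baseprod/ft_sensors.csv", delimiter=",", skiprows=1)  # hypothetical path
n_wheels = 6
for i in range(n_wheels):
    fx, fy, fz = (data[:, 1 + 6 * i + k] for k in range(3))
    force_mag = np.sqrt(fx**2 + fy**2 + fz**2)  # force magnitude per sample [N]
    print(f"wheel {i}: mean |F| = {force_mag.mean():.1f} N, "
          f"mean Fz = {fz.mean():.1f} N")
```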
Reference: Gerdes, L., Wiese, T., Castilla Arquillo, R., Bielenberg, L., Azkarate, M., Leblond, H., Wilting, F., Ortega Cortés, J., Bernal, A., Palanco, S., & Pérez del Pulgar, C. (2024). BASEPROD: The Bardenas Semi-Desert Planetary Rover Dataset. Scientific Data, 11, 1054.
Hardware-Accelerated Mars Sample Localization Via Deep Transfer Learning From Photorealistic Simulations
The goal of the Mars Sample Return campaign is to collect soil samples from the surface of Mars and return them to Earth for further study. The samples will be acquired and stored in metal tubes by the Perseverance rover and deposited on the Martian surface. As part of this campaign, the Sample Fetch Rover is expected to be in charge of localizing and gathering up to 35 sample tubes over 150 Martian sols. Autonomous capabilities are critical for the success of the overall campaign and for the Sample Fetch Rover in particular. This work proposes a novel system architecture for the autonomous detection and pose estimation of the sample tubes. For the detection stage, a deep neural network and transfer learning from a synthetic dataset are proposed; the dataset is created from photorealistic 3D simulations of Martian scenarios. The poses of the detected sample tubes are then estimated using computer vision techniques such as contour detection and line fitting on the detected area. Finally, laboratory tests of the sample localization procedure are performed using the ExoMars Testing Rover on a Mars-like testbed. These tests validate the proposed approach on different hardware architectures, providing promising results for sample detection and pose estimation.
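A minimal sketch of the kind of contour-detection and line-fitting step described above, using OpenCV: given a binary mask standing in for the network's detection output (a hypothetical input for illustration, not the authors' pipeline), it extracts the tube contour and fits a line whose direction approximates the tube's in-plane axis.

```python
# Sketch: estimate a tube's in-plane orientation from a detection mask
# via contour extraction and line fitting.
import cv2
import numpy as np

# Hypothetical input: a grayscale crop around the detected tube.
crop = cv2.imread("detection_crop.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(crop, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
tube = max(contours, key=cv2.contourArea)  # assume the largest blob is the tube

# Fit a 2D line (vx, vy, x0, y0) through the contour points; the direction
# vector (vx, vy) approximates the tube's principal axis in the image plane.
vx, vy, x0, y0 = cv2.fitLine(tube, cv2.DIST_L2, 0, 0.01, 0.01).ravel()
angle_deg = np.degrees(np.arctan2(vy, vx))
print(f"tube axis angle: {angle_deg:.1f} deg w.r.t. the image x-axis")
```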
Reference: Castilla-Arquillo, R., Pérez-del-Pulgar, C. J., Paz-Delgado, G. J., & Gerdes, L. (2022). Hardware-Accelerated Mars Sample Localization via Deep Transfer Learning from Photorealistic Simulations. IEEE Robotics and Automation Letters, 7(4), 12555-12561.