TartanGround is a large-scale, multi-modal dataset designed to advance the perception and autonomy of ground robots operating in diverse environments. Collected across a variety of photorealistic simulation environments, it includes multiple RGB stereo cameras providing 360-degree coverage, along with depth, optical flow, stereo disparity, LiDAR point clouds, ground-truth poses, semantically segmented images, and occupancy maps with semantic labels. Data is collected with an integrated automatic pipeline that generates trajectories mimicking the motion patterns of various ground robot platforms, including wheeled and legged robots. In total, the dataset comprises 878 trajectories across 63 environments, amounting to 1.44 million samples.
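The paper does not prescribe a loading API here, but a multi-modal sample of this kind is typically consumed as a set of per-frame, per-modality files. The sketch below shows one way such a sample might be assembled in Python; the directory layout, file names, and formats (PNG images, NumPy arrays for depth/flow/LiDAR, a per-trajectory pose file) are assumptions for illustration, not the dataset's actual schema.

```python
# Hypothetical loader sketch for a TartanGround-style multi-modal sample.
# The folder layout, file names, and formats below are assumptions for
# illustration only -- consult the dataset documentation for the real schema.
from dataclasses import dataclass
from pathlib import Path

import numpy as np
from PIL import Image


@dataclass
class GroundSample:
    rgb: np.ndarray        # H x W x 3 color image
    depth: np.ndarray      # H x W metric depth
    flow: np.ndarray       # H x W x 2 optical flow
    semantics: np.ndarray  # H x W integer class labels
    lidar: np.ndarray      # N x 3 point cloud
    pose: np.ndarray       # 7-vector: xyz translation + quaternion


def load_sample(traj_dir: Path, idx: int) -> GroundSample:
    """Load one frame of a trajectory from per-modality subfolders (assumed layout)."""
    name = f"{idx:06d}"
    rgb = np.asarray(Image.open(traj_dir / "image_left" / f"{name}.png"))
    depth = np.load(traj_dir / "depth_left" / f"{name}.npy")
    flow = np.load(traj_dir / "flow" / f"{name}.npy")
    semantics = np.asarray(Image.open(traj_dir / "seg_left" / f"{name}.png"))
    lidar = np.load(traj_dir / "lidar" / f"{name}.npy")
    poses = np.loadtxt(traj_dir / "pose_left.txt")  # one row per frame
    return GroundSample(rgb, depth, flow, semantics, lidar, poses[idx])


if __name__ == "__main__":
    sample = load_sample(Path("TartanGround/ExampleEnv/trajectory_000"), 0)
    print(sample.rgb.shape, sample.depth.shape, sample.lidar.shape)
```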
@article{patel2025tartanground,
  title={TartanGround: A Large-Scale Dataset for Ground Robot Perception and Navigation},
  author={Patel, Manthan and Yang, Fan and Qiu, Yuheng and Cadena, Cesar and Scherer, Sebastian and Hutter, Marco and Wang, Wenshan},
  journal={arXiv preprint arXiv:2505.10696},
  year={2025}
}