The RoboTurk dataset is a large-scale, diverse dataset covering three real-world tasks: Laundry Layout, Tower Creation, and Object Search. All three task datasets were collected remotely by crowdsourced workers through the RoboTurk platform. The dataset consists of 2144 demonstrations from 54 unique users.
Notably, RoboTurk includes tasks with complex 3D motions that can be leveraged for similar 3D manipulation tasks. Furthermore, the tasks are long-horizon, so prediction models must be able to reason about different parts of the task given some context or history. In addition, the dataset can be used for action-conditioned video prediction (e.g., for model-based predictive control) or for action-free video prediction.
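The distinction between action-conditioned and action-free prediction comes down to what the model is given alongside past frames. A minimal sketch of how such batches might be assembled is below; the array shapes, the 7-dimensional action vector, and the `make_prediction_batch` helper are all illustrative assumptions, not part of the RoboTurk release.

```python
import numpy as np

# Hypothetical shapes for illustration only; the actual RoboTurk
# observation and action dimensions depend on the release you download.
T, H, W, C = 10, 64, 64, 3   # frames per clip, image height/width/channels
A = 7                        # e.g., a 7-DoF arm action vector (assumption)

frames = np.random.rand(T, H, W, C).astype(np.float32)
actions = np.random.rand(T - 1, A).astype(np.float32)  # one action per transition

def make_prediction_batch(frames, actions=None, context=2):
    """Split a clip into context frames and target frames.

    With `actions`, the batch suits action-conditioned prediction
    (predict future frames given past frames *and* the actions taken);
    without, it suits action-free prediction.
    """
    batch = {
        "context": frames[:context],   # frames the model conditions on
        "targets": frames[context:],   # frames the model must predict
    }
    if actions is not None:
        # actions[t] is the action taken between frame t and frame t+1,
        # so the targets are paired with actions[context-1:] onward
        batch["actions"] = actions[context - 1:]
    return batch

conditioned = make_prediction_batch(frames, actions)
action_free = make_prediction_batch(frames)
```

For model-based predictive control, the action-conditioned variant is the relevant one: a controller can roll the model forward under candidate action sequences and pick the sequence whose predicted frames best match a goal.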
To collect task demonstrations, users connect to the RoboTurk platform from remote locations through a web browser and use their smartphone as a motion controller to move the physical robot arm in free space. A video stream of the robot workspace is provided in the browser.
@inproceedings{mandlekar2019scaling,
title={Scaling Robot Supervision to Hundreds of Hours with {RoboTurk}: Robotic Manipulation Dataset through Human Reasoning and Dexterity},
author={Mandlekar, Ajay and Booher, Jonathan and Spero, Max and Tung, Albert and Gupta, Anchit and Zhu, Yuke and Garg, Animesh and Savarese, Silvio and Fei-Fei, Li},
booktitle={2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
pages={1048--1055},
year={2019},
organization={IEEE}
}