The interfaces used to operate assistive robots typically employ fixed, predefined mappings from interface-level commands to robot control commands. User-defined control maps instead account for an individual’s preferences and capabilities, moving away from a one-size-fits-all mapping paradigm. These datasets contain mixtures of raw, filtered, and synthetic user control signal data that can be used to learn personalized interface mappings for assistive devices.
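As a minimal sketch of what learning such a personalized mapping might look like, the snippet below fits a per-user classifier from raw interface signals to robot motion labels, in place of a fixed signal-to-command table. The array shapes, the two-dimensional joystick-style signals, the label set, and the use of scikit-learn are all illustrative assumptions, not part of the datasets’ specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins for one user's recorded data: each row is a raw
# interface signal (e.g., a 2-D joystick deflection), each label a robot
# motion class such as 0 = "forward", 1 = "left", 2 = "right".
signals = rng.normal(size=(200, 2))
labels = rng.integers(0, 3, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    signals, labels, test_size=0.25, random_state=0
)

# The personalized map: a classifier fit only on this user's examples,
# replacing a fixed, predefined interface-to-command mapping.
mapping = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out mapping accuracy: {mapping.score(X_test, y_test):.2f}")
```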
The data, gathered from examples of human-issued interface commands and their associated robot motion labels, can be prone to sparsity and to inconsistencies between the human’s intended and issued interface commands.
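One way sparsity is sometimes mitigated, and a plausible reading of the synthetic portion of these datasets, is to augment a small set of recorded signals with jittered copies. The sketch below assumes fixed-length signal vectors and a Gaussian noise model; the `augment` helper, its noise scale, and the shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(signals, labels, copies=5, noise_scale=0.05, rng=rng):
    """Return the originals plus `copies` noisy variants of each signal.

    Labels are carried over unchanged, which implicitly assumes each
    recorded label reflects the intended command; any intended-versus-
    issued inconsistency would propagate into the synthetic data.
    """
    jitter = rng.normal(scale=noise_scale, size=(copies,) + signals.shape)
    synthetic = (signals[None, :, :] + jitter).reshape(-1, signals.shape[1])
    return (np.vstack([signals, synthetic]),
            np.concatenate([labels, np.tile(labels, copies)]))

sparse_signals = rng.normal(size=(10, 2))   # few human-issued examples
sparse_labels = rng.integers(0, 3, size=10)
aug_signals, aug_labels = augment(sparse_signals, sparse_labels)
print(aug_signals.shape, aug_labels.shape)  # (60, 2) (60,)
```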