Title: Learning Gentle Grasping Using Vision, Sound, and Touch
Authors: Nakahara, Ken; Calandra, Roberto
Date issued: 2024
Date available: 2025-03-11
URI: https://opara.zih.tu-dresden.de/handle/123456789/1361
DOI: https://doi.org/10.25532/OPARA-787
License: Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Description:
This dataset contains 1,500 robotic grasps collected for the paper "Learning Gentle Grasping Using Vision, Sound, and Touch." We additionally provide a description of the dataset and Python scripts to visualize the data and to process the raw recordings into a training dataset for a PyTorch model. The robotic system consists of a multi-fingered robotic hand (16-DoF Allegro Hand v4.0), a 7-DoF robotic arm (xArm7), DIGIT tactile sensors, an RGB-D camera (Intel RealSense D435i), and a commodity microphone. The target object is a toy that emits sound when grasped strongly.
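The record mentions Python scripts that process the raw recordings into a PyTorch training dataset. As a rough illustration of what consuming such data might look like, the sketch below wraps per-grasp archives in a torch.utils.data.Dataset. The directory layout, the .npz file format, and the array keys (rgb, tactile, audio, label) are assumptions made for this example only; consult the bundled dataset description and scripts for the actual structure.

```python
# Minimal sketch of a PyTorch Dataset over the grasp recordings.
# NOTE: file layout and key names below are ASSUMPTIONS for illustration,
# not the dataset's documented format.
import os
from glob import glob

import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader


class GraspDataset(Dataset):
    """Loads one multimodal grasp sample (vision, touch, sound, label)."""

    def __init__(self, root: str):
        # Hypothetical layout: one .npz archive per grasp trial.
        self.paths = sorted(glob(os.path.join(root, "*.npz")))

    def __len__(self) -> int:
        return len(self.paths)

    def __getitem__(self, idx: int):
        sample = np.load(self.paths[idx])
        # Hypothetical keys; adapt to the dataset's documented fields.
        rgb = torch.from_numpy(sample["rgb"]).float()          # RGB-D camera image
        tactile = torch.from_numpy(sample["tactile"]).float()  # DIGIT sensor images
        audio = torch.from_numpy(sample["audio"]).float()      # microphone clip
        label = torch.tensor(sample["label"]).float()          # e.g. did the toy emit sound?
        return rgb, tactile, audio, label


if __name__ == "__main__":
    dataset = GraspDataset("path/to/grasps")  # placeholder path
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    for rgb, tactile, audio, label in loader:
        print(rgb.shape, tactile.shape, audio.shape, label.shape)
        break
```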