Learning Gentle Grasping Using Vision, Sound, and Touch
Type of the data: Dataset
Total size of the dataset: 4,784,971,381 bytes (approx. 4.8 GB)
Author: Nakahara, Ken
Author: Calandra, Roberto
Upload date: 2025-03-11T07:45:20Z
Publication date: 2025-03-11
Date of data creation: 2024
Abstract of the dataset: This dataset contains 1,500 robotic grasps collected for the paper "Learning Gentle Grasping Using Vision, Sound, and Touch". Additionally, we provide a description of this dataset and Python scripts to visualize the data and process the raw data into a training dataset for a PyTorch model (a hypothetical loading sketch follows the metadata listing below). The robotic system used consists of a multi-fingered robotic hand (16-DoF, Allegro Hand v4.0), 7-DoF robotic arms (xArm7), DIGIT tactile sensors, an RGB-D camera (Intel RealSense D435i), and a commodity microphone. The target object is a toy that emits sound when grasped strongly.
Public reference to this page: https://opara.zih.tu-dresden.de/handle/123456789/1361
Public reference to this page: https://doi.org/10.25532/OPARA-787
Publisher: Technische Universität Dresden
Licence: Attribution-NonCommercial-NoDerivatives 4.0 International
URI of the licence text: http://creativecommons.org/licenses/by-nc-nd/4.0/
Specification of the discipline(s): 4::44::409::409-05
Title of the dataset: Learning Gentle Grasping Using Vision, Sound, and Touch
Software: Python
Project abstract: In our daily life, we often encounter objects that are fragile and can be damaged by excessive grasping force, such as fruits. For these objects, it is paramount to grasp gently: not using the maximum amount of force possible, but rather the minimum amount of force necessary. This paper proposes using visual, tactile, and auditory signals to learn to grasp and regrasp fragile objects stably and gently. Specifically, we use audio signals as an indicator of gentleness during grasping, and then train end-to-end an action-conditional model from raw visuo-tactile inputs that predicts both the stability and the gentleness of future grasping candidates, thus allowing the selection and execution of the most promising action (a schematic model sketch follows the metadata listing below). Experimental results on a multi-fingered hand over 1,500 grasping trials demonstrated that our model is useful for gentle grasping, both by validating its predictive performance (3.27% higher accuracy than the vision-only variant) and by providing interpretations of its behavior. Finally, real-world experiments confirmed that the grasping performance with the trained multi-modal model outperformed other baselines (17% higher rate of stable and gentle grasps than vision-only). Our approach requires neither tactile sensor calibration nor analytical force modeling, drastically reducing the engineering effort needed to grasp fragile objects.
Public project website(s): https://lasr.org/research/gentle-grasping
Project title: Learning Gentle Grasping Using Vision, Sound, and Touch
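
The dataset abstract mentions Python scripts that turn the raw recordings into a training dataset for a PyTorch model. As a minimal sketch of how such processed trials might be consumed, the snippet below defines a PyTorch `Dataset` that reads one serialized trial per file. The directory layout, file extension, and tensor keys (`rgb`, `tactile`, `action`, `stable`, `gentle`) are illustrative assumptions, not the dataset's documented structure; the bundled description and scripts define the actual format.

```python
import glob
import os

import torch
from torch.utils.data import Dataset


class GraspTrialDataset(Dataset):
    """Hypothetical loader: one grasp trial per .pt file containing a dict
    with the camera image, DIGIT tactile image(s), the executed action, and
    the stability/gentleness labels."""

    def __init__(self, root_dir):
        # Assumed layout: a flat directory of serialized trial dictionaries.
        self.paths = sorted(glob.glob(os.path.join(root_dir, "*.pt")))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        trial = torch.load(self.paths[idx])  # assumed: dict of tensors
        inputs = {
            "rgb": trial["rgb"],          # RGB-D camera image, e.g. (3, H, W)
            "tactile": trial["tactile"],  # DIGIT tactile image(s)
            "action": trial["action"],    # candidate grasp action vector
        }
        labels = {
            "stable": trial["stable"],    # grasp-stability label
            "gentle": trial["gentle"],    # gentleness label (derived from sound)
        }
        return inputs, labels
```

With such a layout, wrapping the dataset in a `torch.utils.data.DataLoader` would batch trials for training; again, the directory name and keys are placeholders to be replaced with the provided scripts' actual conventions.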
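
The project abstract describes an action-conditional model trained end-to-end on raw visuo-tactile inputs to predict both the stability and the gentleness of candidate grasps. The sketch below shows one possible shape for such a model, assuming a 16-dimensional action vector (matching the 16-DoF hand), a single 3-channel tactile image, and small convolutional encoders with late fusion; these layer sizes and the fusion scheme are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn


class ActionConditionalGraspModel(nn.Module):
    """Schematic action-conditional predictor: fuses visual and tactile
    features with a candidate action and outputs two probabilities
    (stability, gentleness)."""

    def __init__(self, action_dim=16):
        super().__init__()
        # Small convolutional encoders for the RGB and tactile streams.
        self.rgb_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.tactile_enc = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse image features with the candidate action and predict both labels.
        self.head = nn.Sequential(
            nn.Linear(32 + 32 + action_dim, 128), nn.ReLU(),
            nn.Linear(128, 2),  # logits: [stability, gentleness]
        )

    def forward(self, rgb, tactile, action):
        feat = torch.cat(
            [self.rgb_enc(rgb), self.tactile_enc(tactile), action], dim=1
        )
        return torch.sigmoid(self.head(feat))  # per-label probabilities
```

At inference time, a model of this form could be evaluated on a set of sampled grasp candidates and the candidate with the highest combined stability and gentleness score selected for execution, mirroring the selection step described in the abstract.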
Files
License bundle
- Name: license.txt
- Size: 4.66 KB
- Description: Item-specific license agreed to upon submission
