DeformNet: Latent Space Modeling and Dynamics Prediction for Deformable Object Manipulation

C Li, Z Ai, T Wu, X Li, W Ding, H Xu - arXiv preprint arXiv:2402.07648, 2024 - arxiv.org
Manipulating deformable objects is a ubiquitous task in household environments, demanding adequate representation and accurate dynamics prediction due to the objects' infinite degrees of freedom. This work proposes DeformNet, which utilizes latent space modeling with a learned 3D representation model to tackle these challenges effectively. The proposed representation model combines a PointNet encoder and a conditional neural radiance field (NeRF), facilitating a thorough acquisition of object deformations and variations in lighting conditions. To model the complex dynamics, we employ a recurrent state-space model (RSSM) that accurately predicts the transformation of the latent representation over time. Extensive simulation experiments with diverse objectives demonstrate the generalization capabilities of DeformNet for various deformable object manipulation tasks, even in the presence of previously unseen goals. Finally, we deploy DeformNet on an actual UR5 robotic arm to demonstrate its capability in real-world scenarios.
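The pipeline described above — encode a point-cloud observation into a latent vector with a permutation-invariant PointNet-style encoder, then roll that latent forward in time with a recurrent state-space model — can be sketched minimally as follows. This is an illustrative NumPy sketch, not the paper's implementation: the dimensions, the deterministic `rssm_step` (a real RSSM also carries a stochastic latent), and all weight names are assumptions, and the NeRF decoder used for representation learning is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def pointnet_encode(points, W1, W2):
    """PointNet-style encoder: a shared per-point MLP followed by a
    permutation-invariant max pool.  points: (N, 3) -> latent (D,)."""
    h = np.maximum(points @ W1, 0.0)   # per-point features, ReLU
    h = np.maximum(h @ W2, 0.0)
    return h.max(axis=0)               # symmetric max pooling over points

def rssm_step(z, a, Wz, Wa, b):
    """One deterministic step of a simplified recurrent state-space model:
    predict the next latent from the current latent z and action a."""
    return np.tanh(z @ Wz + a @ Wa + b)

# Hypothetical sizes: 3-D points, 16-D hidden, 8-D latent, 4-D action.
W1 = rng.standard_normal((3, 16)) * 0.1
W2 = rng.standard_normal((16, 8)) * 0.1
Wz = rng.standard_normal((8, 8)) * 0.1
Wa = rng.standard_normal((4, 8)) * 0.1
b = np.zeros(8)

cloud = rng.standard_normal((256, 3))   # one observed point cloud
z = pointnet_encode(cloud, W1, W2)      # observation -> latent state
for _ in range(5):                      # roll the latent dynamics forward
    z = rssm_step(z, rng.standard_normal(4), Wz, Wa, b)
```

Because the encoder pools with a symmetric `max`, reordering the input points leaves the latent unchanged — the property that makes point clouds a convenient observation for deformable objects, whose vertices have no canonical ordering.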