Reengineering hand function with a robotic manipulator is a research hotspot in the field of robotics. In this paper, we propose a multimodal perception and control method for a robotic hand that assists people with disabilities. Human hand movement can be divided into two parts: coordinating the posture of the fingers, and coordinating the timing of grasping and releasing objects. Accordingly, we first built a finger-mounted visual device around a pinhole camera and pre-classified object shapes with YOLOv8. We then proposed a pipeline that filters multi-frame synthesized point cloud data from a miniature 2D Lidar, clusters objects with the DBSCAN algorithm, and matches them with the DTW algorithm to further identify the cross-sectional shape and size of the grasped part of the object, thereby controlling the robot's grasping gesture. Finally, to control the grasping process, we proposed a fusion algorithm that combines upper-limb motion state, hand position, and lesser-toe tactile information, realizing human-in-the-loop control of robotic grasping. The device designed in this paper does not contact the skin and causes no discomfort, and the grasping-process experiments achieved a completion rate of 91.63%, indicating that the proposed control method is feasible and applicable.
Keywords: 2D Lidar; environmental perception; human–machine interaction; intention recognition; prosthetic hand.
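The clustering-and-matching stage summarized in the abstract (DBSCAN over 2D Lidar points, then DTW against stored shape templates) can be sketched as below. This is a minimal illustrative implementation, not the paper's code: the parameter values (`eps`, `min_pts`) and the use of a 1D range profile as the DTW input are assumptions for the sake of the example.

```python
import math


def dbscan(points, eps=0.05, min_pts=3):
    """Minimal DBSCAN over 2D points (list of (x, y) tuples).

    Returns one integer label per point; -1 marks noise.
    eps/min_pts are placeholder values, not the paper's settings.
    """
    labels = [None] * len(points)  # None = unvisited

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbors(i)
        if len(seeds) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:     # border point: absorb, don't expand
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbors(j)
            if len(nb) >= min_pts:  # core point: expand the cluster
                queue.extend(nb)
    return labels


def dtw_distance(a, b):
    """Dynamic time warping distance between two 1D sequences,
    e.g. a cluster's range profile vs. a stored shape template."""
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

In use, each Lidar scan would be split into clusters by `dbscan`, and each cluster's profile compared via `dtw_distance` to a library of cross-section templates (cylinder, box, etc.), with the nearest template selecting the grasping gesture.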