Three-dimensional point cloud recognition is a fundamental task in fields such as autonomous driving and face recognition. In real industrial scenarios, however, input point cloud data are often corrupted by occlusion, rotation, and noise, which makes existing point cloud classification algorithms difficult to apply. Most current studies attempt to improve robustness by modifying the neural network architecture, but adjusting the network structure alone has proven insufficient to counter the accuracy degradation caused by data corruption. In this article, we use local feature descriptors as a preprocessing step to extract features from point cloud data and propose a new neural network architecture aligned with these local features, which remains effective even under severe data corruption. In addition, we apply data augmentation to 10 deliberately selected categories of ModelNet40. Finally, we conduct extensive experiments, testing the model's robustness to occlusion and coordinate transformations and comparing it with existing state-of-the-art (SOTA) models. We also evaluate the model in real scenes, capturing objects with depth cameras and feeding the acquired data into the trained model. The experimental results show that our model outperforms existing popular algorithms on corrupted point cloud data: even when the input is affected by occlusion or coordinate transformations, it maintains high accuracy. This suggests that our method alleviates the drop in model accuracy caused by these factors.
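To make the preprocessing idea concrete, the following is a minimal sketch of local-feature extraction on a point cloud. The abstract does not specify which descriptor the paper uses, so this illustration assumes standard covariance-eigenvalue features (linearity, planarity, sphericity) computed over each point's k-nearest neighborhood; the function name and parameters are hypothetical.

```python
import numpy as np

def local_covariance_features(points, k=16):
    """Per-point eigenvalue-based local shape features.

    Generic illustration of local geometric descriptors as a
    preprocessing step (not the paper's exact method): for each
    point, the covariance of its k nearest neighbors is
    eigendecomposed, and linearity, planarity, and sphericity
    are derived from the sorted eigenvalues.
    """
    n = points.shape[0]
    # Brute-force pairwise squared distances for k-NN lookup.
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, :k]  # k nearest neighbors (incl. the point itself)
    feats = np.zeros((n, 3))
    for i in range(n):
        nbrs = points[idx[i]]
        cov = np.cov(nbrs.T)                         # 3x3 local covariance
        w = np.sort(np.linalg.eigvalsh(cov))[::-1]   # eigenvalues, descending
        w = np.maximum(w, 1e-12)                     # guard against degenerate patches
        feats[i] = [(w[0] - w[1]) / w[0],            # linearity
                    (w[1] - w[2]) / w[0],            # planarity
                    w[2] / w[0]]                     # sphericity
    return feats
```

By construction the three features of each point sum to one, so they form a soft assignment of the local neighborhood to line-like, plane-like, or volumetric shape; such descriptors can then be concatenated with raw coordinates before being fed to a classification network.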
Keywords: deep neural networks; local feature descriptor; object classification; partial point cloud; point cloud.