Submission: PointVoteNet2/TUD-L/TUDL-PVN2-E6

Submission name TUDL-PVN2-E6
Submission time (UTC) Aug. 19, 2020, 10:06 a.m.
User frederikhagelskjaer
Task Model-based 6D localization of seen objects
Dataset TUD-L
Training model type Default
Training image type Synthetic (only the PBR images provided for the BOP Challenge 2020 were used)
Description
Evaluation scores
AR: 0.673
AR_MSPD: 0.680
AR_MSSD: 0.681
AR_VSD: 0.659
average_time_per_image: -1.000
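In the BOP evaluation, AR is the average of the three pose-error recalls: (0.659 + 0.681 + 0.680) / 3 ≈ 0.673.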

Method: PointVoteNet2

User frederikhagelskjaer
Publication PointVoteNet: Accurate Object Detection and 6 DoF Pose Estimation in Point Clouds
Implementation
Training image modalities RGB-D
Test image modalities RGB-D
Description

The method is a modification of the method described in the paper. A new addition is that background segmentation and feature segmentation are split into two separate predictions. Two DGCNN kNN layers are also used instead of the original PointNet network. As in the original paper, for each point not predicted as background, the highest-scoring feature prediction is used; however, all feature predictions within 95% of the highest score are also accepted, to accommodate symmetric objects (see the sketch below). As in the publication, detection and pose estimation are performed in 3D, and a final pose verification using 2D images is applied.
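The 95% acceptance rule is easy to misread, so here is a minimal sketch of that selection step. It assumes per-point background probabilities and per-point feature scores as inputs; the function name, tensor shapes, and the 0.5 background threshold are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the per-point feature selection described above
# (assumed shapes/names, not the authors' implementation).
import numpy as np

def select_feature_predictions(bg_prob, feat_scores, bg_threshold=0.5, ratio=0.95):
    """bg_prob: (N,) background probability per point.
    feat_scores: (N, F) score of each of F object features per point.
    Returns an (N, F) boolean mask of accepted feature predictions."""
    foreground = bg_prob < bg_threshold            # drop points predicted as background
    best = feat_scores.max(axis=1, keepdims=True)  # highest feature score per point
    accepted = feat_scores >= ratio * best         # keep everything within 95% of the max
    return accepted & foreground[:, None]

# Toy example: 3 points, 3 features. Point 1 is background; point 2 has two
# near-tied features, as would happen on a symmetric object.
bg = np.array([0.1, 0.9, 0.2])
scores = np.array([[0.80, 0.40, 0.10],
                   [0.90, 0.10, 0.10],
                   [0.52, 0.20, 0.50]])
print(select_feature_predictions(bg, scores))
# [[ True False False]
#  [False False False]
#  [ True False  True]]
```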

For training, only the provided synthetic BlenderProc4BOP images were used. Approximately one-fourth of the scenes were used for training. The data was converted to point clouds, and spheres were extracted as in the publication. No further augmentation was performed besides random noise added during training (see the sketch below).
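As a rough illustration of the data preparation above, the sketch below crops a sphere around a point and adds random noise; the radius, noise magnitude, and function names are placeholder assumptions, not values from the publication.

```python
# Rough sketch of sphere extraction and the noise augmentation (all parameter
# values are placeholders, not taken from the publication).
import numpy as np

def extract_sphere(cloud, center, radius):
    """Return the points of `cloud` (N, 3) within `radius` of `center` (3,)."""
    mask = np.linalg.norm(cloud - center, axis=1) <= radius
    return cloud[mask]

def jitter_points(points, sigma=0.002, clip=0.01):
    """Add clipped Gaussian noise to point coordinates during training."""
    noise = np.clip(sigma * np.random.randn(*points.shape), -clip, clip)
    return points + noise
```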

Each object takes two hours to train. As a result of time constraints, not all datasets were trained adequately, and thus subpar performance was obtained.

Computer specifications The networks were trained on a Tesla P100-PCIE (Google Colab), and testing was performed on a GeForce RTX 2080 with an Intel(R) Core(TM) i9-9820X CPU @ 3.30GHz.