Submission name | |
---|---|
Submission time (UTC) | Aug. 19, 2020, 2:03 a.m. |
User | yann_labbe |
Task | Model-based 6D localization of seen objects |
Dataset | LM-O |
Training model type | Default |
Training image type | Synthetic (only PBR images provided for BOP Challenge 2020 were used) |
Description | |
Evaluation scores | |
User | yann_labbe |
---|---|
Publication | Labbé et al., CosyPose: Consistent multi-view multi-object 6D pose estimation, ECCV 2020 |
Implementation | https://github.com/ylabbe/cosypose |
Training image modalities | RGB |
Test image modalities | RGB-D |
Description | We apply ICP to each individual 6D pose estimate. The pose estimates are the ones from CosyPose-ECCV20-SYNT+REAL-1VIEW. We use parts of Pix2Pose's ICP implementation https://github.com/kirumang/Pix2Pose/blob/master/tools/5_evaluation_bop_icp3d.py. Mask R-CNN predicted masks are used to select the 3D points in the depth image that correspond to the detection (see the sketch below this table). |
Computer specifications | CPU: 20-core Intel Xeon 6164 @ 3.2 GHz, GPU: Nvidia V100 |
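For reference, the refinement step in the description amounts to back-projecting the depth pixels inside the predicted mask into a scene point cloud and registering the object model against it with ICP, initialized at the CosyPose estimate. The sketch below illustrates this under stated assumptions: it uses Open3D's point-to-point ICP rather than the Pix2Pose implementation linked above, and the function names, the `max_corr_dist` parameter, and the metric depth convention are hypothetical, not taken from the submission.

```python
# Minimal sketch of depth-based ICP refinement of a single 6D pose estimate.
# Assumptions (not from the original submission): Open3D is used instead of
# the Pix2Pose ICP code; depth is in meters; names/parameters are hypothetical.
import numpy as np
import open3d as o3d


def backproject_masked_depth(depth, mask, K):
    """Back-project depth pixels inside the predicted mask to 3D points
    in the camera frame (depth: HxW in meters, K: 3x3 intrinsics)."""
    v, u = np.nonzero((mask > 0) & (depth > 0))
    z = depth[v, u]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)


def refine_pose_with_icp(T_init, model_points, depth, mask, K,
                         max_corr_dist=0.02):
    """Refine an initial 4x4 object-to-camera pose with point-to-point ICP,
    aligning the object model points to the masked scene points."""
    scene = o3d.geometry.PointCloud()
    scene.points = o3d.utility.Vector3dVector(
        backproject_masked_depth(depth, mask, K))
    model = o3d.geometry.PointCloud()
    model.points = o3d.utility.Vector3dVector(model_points)
    result = o3d.pipelines.registration.registration_icp(
        model, scene, max_corr_dist, T_init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # refined 4x4 pose
```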