Submission: PVNet-CVPR19+ICP/LM-O

Submission name PVNet-CVPR19+ICP/LM-O
Submission time (UTC) Aug. 11, 2020, 5:42 a.m.
User haotonglin
Task 6D localization of seen objects
Dataset LM-O
Training model type Default
Training image type Synthetic (only the PBR images provided for the BOP Challenge 2020 were used)
Description It takes about 4 hours to train a CenterNet and 8 hours to train a PVNet for each object. PBR training images improve performance compared to the synthetic data rendered with the original Blender pipeline used in our paper.
Evaluation scores
AR:0.638
AR_MSPD:0.730
AR_MSSD:0.683
AR_VSD:0.502
average_time_per_image:-1.000

Method: PVNet-CVPR19+ICP

User haotonglin
Publication Sida Peng et al.: PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation, CVPR 2019
Implementation https://github.com/zju3dv/clean-pvnet
Training image modalities RGB
Test image modalities RGB-D
Description

PVNet + ICP

Method Overview

We first use an object detector, CenterNet, to detect objects in the image. Given the detected object bounding box, we crop the image and use PVNet to detect 2D object keypoints, which are used to compute the 6D pose through the PnP algorithm. Finally, we use ICP to refine the estimated pose.
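
As a rough illustration of the pose-from-keypoints step, here is a minimal sketch that computes a pose from 3D-2D keypoint correspondences with OpenCV's PnP. The names kpts_3d, kpts_2d, and K are placeholders, and the paper's uncertainty-driven PnP is replaced here by a plain RANSAC-EPnP call.

```python
# Minimal pose-from-keypoints sketch. kpts_3d, kpts_2d, and K are
# hypothetical placeholders; the actual method uses an uncertainty-driven
# PnP rather than this plain RANSAC-EPnP variant.
import cv2
import numpy as np

def pose_from_keypoints(kpts_3d, kpts_2d, K):
    """Estimate a 6D pose from 3D-2D keypoint correspondences via PnP.

    kpts_3d: (N, 3) keypoints on the object model (object frame).
    kpts_2d: (N, 2) predicted keypoint locations in the image.
    K:       (3, 3) camera intrinsic matrix.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        kpts_3d.astype(np.float64),
        kpts_2d.astype(np.float64),
        K.astype(np.float64),
        distCoeffs=None,
        flags=cv2.SOLVEPNP_EPNP,
    )
    R, _ = cv2.Rodrigues(rvec)     # rotation vector -> 3x3 rotation matrix
    return R, tvec.reshape(3)      # object-to-camera rotation and translation
```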

We train one network per object for both CenterNet and PVNet, using only the PBR synthetic data provided by the BOP Challenge 2020.

Differences between the evaluated method and the linked publication

  1. We use a 2D detector (CenterNet) so that PVNet can handle multiple object instances.
  2. We use an offset field instead of a vector field for keypoint localization (illustrated in the first sketch after this list).
  3. We train only on the PBR synthetic data provided by the BOP Challenge 2020.
  4. We use the ICP algorithm to refine the estimated 6D poses (see the ICP sketch after this list).
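
To make the offset-field change (item 2) concrete, the following is a minimal sketch under the assumption that the network predicts, for each pixel inside the object mask, the 2D displacement to every keypoint, and that keypoints are recovered by averaging the per-pixel votes. The original PVNet instead predicts unit direction vectors and locates keypoints by RANSAC-based voting.

```python
# Rough illustration of recovering keypoints from a per-pixel offset field.
# The averaging of votes is an assumption for illustration only.
import numpy as np

def keypoints_from_offsets(mask, offsets):
    """Recover 2D keypoint locations from a per-pixel offset field.

    mask:    (H, W) boolean object mask.
    offsets: (K, H, W, 2) predicted (dx, dy) offsets to each of K keypoints.
    """
    ys, xs = np.nonzero(mask)                                  # object pixels
    pixels = np.stack([xs, ys], axis=1).astype(np.float32)     # (N, 2) pixel coords
    kpts = []
    for k in range(offsets.shape[0]):
        votes = pixels + offsets[k, ys, xs]    # each pixel votes for a keypoint location
        kpts.append(votes.mean(axis=0))        # aggregate votes by a simple mean
    return np.stack(kpts)                      # (K, 2) 2D keypoints
```

The ICP refinement (item 4) could look roughly like the Open3D sketch below. The choice of Open3D, the point-to-point objective, and the 1 cm correspondence threshold are assumptions, as the submission does not state which ICP implementation was used; model_points, scene_points, R, and t are placeholders.

```python
# Hypothetical ICP refinement sketch using Open3D (the submission does not
# specify its ICP implementation). model_points: object model points in the
# object frame; scene_points: points back-projected from the depth image in
# the camera frame; R, t: the initial pose from PnP.
import numpy as np
import open3d as o3d

def refine_with_icp(model_points, scene_points, R, t, max_dist=0.01):
    """Refine a PnP pose by point-to-point ICP against the depth point cloud."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(model_points))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(scene_points))

    init = np.eye(4)                 # initial object-to-camera transform from PnP
    init[:3, :3] = R
    init[:3, 3] = t

    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, init,    # max_dist: 1 cm threshold, assuming meters
        o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    T = result.transformation        # refined 4x4 object-to-camera pose
    return T[:3, :3], T[:3, 3]
```
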
Computer specifications Evaluated on an AMD Ryzen 7 3800X 8-core CPU and an NVIDIA GeForce GTX 1660 Super GPU