Submission: PVNet-CVPR19/LM-O

Submission name PVNet-CVPR19/LM-O
Submission time (UTC) Aug. 11, 2020, 5:41 a.m.
User haotonglin
Task 6D localization of seen objects
Dataset LM-O
Training model type Default
Training image type Synthetic (only PBR images provided for BOP Challenge 2020 were used)
Description It takes about 4 hours to train a CenterNet and 8 hours to train a PVNet for one object. The PBR training images improve performance compared to the synthetic data rendered with the original Blender pipeline used in our paper.
Evaluation scores
AR:0.575
AR_MSPD:0.754
AR_MSSD:0.543
AR_VSD:0.428
average_time_per_image:-1.000
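For reference, these scores are consistent with the BOP convention of averaging the three pose-error recalls: AR = (AR_VSD + AR_MSSD + AR_MSPD) / 3 = (0.428 + 0.543 + 0.754) / 3 ≈ 0.575. A per-image time of -1.000 indicates that the running time was not reported.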

Method: PVNet-CVPR19

User haotonglin
Publication Sida Peng et al: PVNet: Pixel-wise Voting Network for 6DoF Pose Estimation, CVPR 2019
Implementation https://github.com/zju3dv/clean-pvnet
Training image modalities RGB
Test image modalities RGB
Description

PVNet

Method Overview

We first use an object detector, CenterNet, to detect objects in the image. Given the detected bounding box, we crop the image and use PVNet to detect 2D object keypoints, which are then used to compute the 6D pose with a PnP algorithm.
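
As a rough illustration of this two-stage pipeline, the sketch below crops each detection, predicts 2D keypoints on the crop, and recovers the pose with OpenCV's PnP solver. The `detector` and `keypoint_net` callables are hypothetical stand-ins for the trained CenterNet and PVNet models (not the actual clean-pvnet API), and the plain EPnP call simplifies the uncertainty-driven PnP used in the paper.

```python
import cv2
import numpy as np

def estimate_poses(image, detector, keypoint_net, kpts_3d, K):
    """Two-stage inference: 2D detection -> keypoint prediction -> PnP.

    detector, keypoint_net: hypothetical callables wrapping the trained
    CenterNet and PVNet models (not the actual clean-pvnet API).
    kpts_3d: (N, 3) array of 3D keypoints defined on the object model.
    K: (3, 3) camera intrinsic matrix.
    """
    poses = []
    for x1, y1, x2, y2 in detector(image):            # CenterNet boxes
        crop = image[int(y1):int(y2), int(x1):int(x2)]
        kpts_2d = keypoint_net(crop)                   # (N, 2) keypoints in crop coordinates
        kpts_2d = kpts_2d + np.array([x1, y1])         # shift back to full-image coordinates
        ok, rvec, tvec = cv2.solvePnP(
            kpts_3d.astype(np.float64),
            kpts_2d.astype(np.float64),
            K.astype(np.float64), None,
            flags=cv2.SOLVEPNP_EPNP)
        if ok:
            R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> 3x3 matrix
            poses.append((R, tvec))                    # object-to-camera pose
    return poses
```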

We train one network per object for both CenterNet and PVNet, using only the PBR synthetic data provided by the BOP Challenge 2020.

Differences between evaluated method and the linked publication

  1. We use a 2D detector to help PVNet handle multiple instances and to reduce the domain gap between synthetic and real data.
  2. We use an offset field instead of a vector field (see the sketch after this list).
  3. We train our model using only the PBR synthetic data provided by the BOP Challenge 2020.
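
Regarding difference 2: the published PVNet regresses per-pixel unit vectors toward each keypoint and localizes keypoints by RANSAC-based voting, whereas an offset field lets every masked pixel predict its displacement to the keypoint, so each pixel directly yields a keypoint hypothesis. The sketch below shows one possible aggregation (a plain average over mask pixels); the exact aggregation used in this submission is not specified here, so treat it as illustrative only.

```python
import numpy as np

def keypoints_from_offsets(offsets, mask):
    """Aggregate a per-pixel offset field into 2D keypoint locations.

    offsets: (K, 2, H, W) predicted (dx, dy) from every pixel to each of K keypoints.
    mask:    (H, W) boolean object segmentation mask.
    Returns: (K, 2) keypoint coordinates in (x, y) image order.

    Illustrative only: the submission does not specify its aggregation scheme.
    """
    ys, xs = np.nonzero(mask)                              # pixels belonging to the object
    pix = np.stack([xs, ys], axis=1).astype(np.float32)    # (M, 2) pixel coordinates
    kpts = []
    for k in range(offsets.shape[0]):
        off = offsets[k][:, ys, xs].T                      # (M, 2) offsets at the mask pixels
        votes = pix + off                                  # each pixel's keypoint hypothesis
        kpts.append(votes.mean(axis=0))                    # average the hypotheses
    return np.stack(kpts)
```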
Computer specifications Evaluated on an AMD Ryzen 7 3800X 8-core CPU and a GTX 1660 Super GPU