Submission: GDRNPP-PBRReal-RGBD-MModel-Fast/HB

Submission time (UTC) Oct. 14, 2022, 6:39 a.m.
User zyMeteroid
Task 6D localization of seen objects
Dataset HB
Training model type Default
Training image type Synthetic (only PBR images provided for BOP Challenge 2020 were used)
Evaluation scores

Method: GDRNPP-PBRReal-RGBD-MModel-Fast

User zyMeteroid
Publication not yet
Implementation PyTorch; code can be found at
Training image modalities RGB
Test image modalities RGB-D

Submitted to: BOP Challenge 2023

Training data: real + provided PBR

Used 3D models: Reconstructed for T-LESS, default for other datasets


Authors: Xingyu Liu, Ruida Zhang, Chenyangguang Zhang, Bowen Fu, Jiwen Tang, Xiquan Liang, Jingyi Tang, Xiaotian Cheng, Yukang Zhang, Gu Wang, and Xiangyang Ji (Tsinghua University).

Building on GDRNPP_PBR_RGB_MModel, we use depth information to further refine the estimated pose. To meet real-time application requirements, we implement a fast refinement module that compares the rendered object depth with the observed depth to refine the translation.
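A minimal sketch of this kind of depth-difference refinement is below. It is not the authors' released code: the function name, the use of a median over object pixels, and the along-the-ray translation update are illustrative assumptions; the actual GDRNPP fast refinement may differ in detail.

```python
import numpy as np

def fast_depth_refine(t_est, depth_rendered, depth_observed, mask):
    """Hypothetical sketch: refine the estimated translation by comparing
    the depth map rendered at the current pose with the observed depth.

    t_est          : (3,) estimated translation (x, y, z) in camera frame
    depth_rendered : (H, W) depth rendered with the estimated pose
    depth_observed : (H, W) sensor depth map
    mask           : (H, W) boolean mask of the object's pixels
    """
    valid = mask & (depth_rendered > 0) & (depth_observed > 0)
    if not np.any(valid):
        return t_est  # no usable depth overlap; keep the RGB estimate

    # Robust (median) depth difference over valid object pixels.
    delta_z = np.median(depth_observed[valid] - depth_rendered[valid])

    # Shift the object along the viewing ray: scaling x and y together
    # with z keeps the 2D projection fixed while matching the observed depth.
    scale = (t_est[2] + delta_z) / t_est[2]
    t_ref = t_est.copy()
    t_ref[0] *= scale
    t_ref[1] *= scale
    t_ref[2] += delta_z
    return t_ref
```

Because only a median and an element-wise update are computed, a step like this adds negligible cost per object, which is consistent with the real-time goal stated above.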

Computer specifications GPU RTX 3090; CPU AMD EPYC 7H12 64-Core Processor.