Submission: GDRNPP-PBRReal-RGBD-MModel-Fast/TUD-L

Submission name
Submission time (UTC) Oct. 14, 2022, 6:38 a.m.
User zyMeteroid
Task 6D localization of seen objects
Dataset TUD-L
Training model type Default
Training image type Synthetic + real
Description
Evaluation scores
AR: 0.936
AR_MSPD: 0.977
AR_MSSD: 0.965
AR_VSD: 0.866
average_time_per_image: 0.125 s
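
For context, the overall AR score in the BOP evaluation is the average of the three per-error-function recalls (AR_VSD, AR_MSSD, AR_MSPD), which the reported numbers reproduce. A minimal check in Python (variable names are illustrative):

# BOP convention: the overall Average Recall is the mean of the recalls
# computed with the VSD, MSSD and MSPD pose-error functions.
ar_vsd, ar_mssd, ar_mspd = 0.866, 0.965, 0.977
ar = (ar_vsd + ar_mssd + ar_mspd) / 3
print(round(ar, 3))  # 0.936, matching the reported AR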

Method: GDRNPP-PBRReal-RGBD-MModel-Fast

User zyMeteroid
Publication not yet
Implementation PyTorch; code available at https://github.com/shanice-l/gdrnpp_bop2022
Training image modalities RGB
Test image modalities RGB-D
Description

Submitted to: BOP Challenge 2023

Training data: real + provided PBR

Used 3D models: Reconstructed for T-LESS, default for other datasets

Notes:

Authors: Xingyu Liu, Ruida Zhang, Chenyangguang Zhang, Bowen Fu, Jiwen Tang, Xiquan Liang, Jingyi Tang, Xiaotian Cheng, Yukang Zhang, Gu Wang, and Xiangyang Ji (Tsinghua University).

Based on GDRNPP_PBR_RGB_MModel, we utilize depth information to further refine the estimated pose. To meet real-time application requirements, we implement a fast refinement module that compares the rendered object depth with the observed depth to refine the translation. A rough sketch of this idea is given below.
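
The following is a minimal sketch of a depth-based translation refinement of this kind, assuming the rendered and observed depth maps are aligned in the same camera frame; the function name fast_depth_refine, the median-based offset, and the ray-scaling step are illustrative assumptions, not necessarily the authors' exact implementation.

import numpy as np

def fast_depth_refine(t_est, depth_rendered, depth_observed):
    # Pixels where the object is rendered and a valid depth measurement exists.
    mask = (depth_rendered > 0) & (depth_observed > 0)
    if not mask.any():
        return t_est
    # Robust depth offset between observation and rendering (the median is
    # less sensitive to sensor noise and partial occlusion than the mean).
    delta_z = np.median(depth_observed[mask] - depth_rendered[mask])
    # Shift the estimate along its viewing ray so the rendered depth matches
    # the observed depth; the rotation is left unchanged.
    scale = (t_est[2] + delta_z) / t_est[2]
    return t_est * scale

Since a step like this needs only one rendering of the object at the estimated pose plus a per-pixel comparison, it adds little overhead per detection, which is consistent with the "fast" refinement variant.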

Computer specifications: GPU NVIDIA GeForce RTX 3090; CPU AMD EPYC 7H12 64-Core Processor.