| Submission time (UTC) | Oct. 14, 2022, 6:39 a.m. |
| Task | 6D localization of seen objects |
| Training model type | Default |
| Training image type | Synthetic (only the PBR images provided for BOP Challenge 2020 were used) |
| Implementation | PyTorch; code available at https://github.com/shanice-l/gdrnpp_bop2022 |
| Training image modalities | RGB |
| Test image modalities | RGB-D |
Submitted to: BOP Challenge 2023
Training data: real + provided PBR
Used 3D models: Reconstructed for T-LESS, default for other datasets
Authors: Xingyu Liu, Ruida Zhang, Chenyangguang Zhang, Bowen Fu, Jiwen Tang, Xiquan Liang, Jingyi Tang, Xiaotian Cheng, Yukang Zhang, Gu Wang, and Xiangyang Ji (Tsinghua University).
Building on GDRNPP_PBR_RGB_MModel, we use depth information to further refine the estimated pose. To meet real-time requirements, we implement a fast refinement module that compares the rendered object depth against the observed depth to refine the translation.
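A minimal sketch of this kind of depth-based translation refinement, assuming the rendered and observed depth maps are aligned and an object mask is available (the function name, array shapes, and the median-offset heuristic are illustrative assumptions, not the authors' exact implementation):

```python
import numpy as np

def refine_translation(t_est, depth_rendered, depth_observed, mask):
    """Refine an estimated translation by comparing rendered vs. observed depth.

    t_est          : (3,) estimated translation in camera coordinates (meters)
    depth_rendered : (H, W) depth map rendered with the estimated pose
    depth_observed : (H, W) measured depth map from the RGB-D sensor
    mask           : (H, W) boolean mask of the object's visible pixels
    """
    # Only compare pixels where both depth maps are valid inside the mask.
    valid = mask & (depth_rendered > 0) & (depth_observed > 0)
    if not valid.any():
        return t_est  # no overlap: keep the original estimate

    # Robust estimate of the depth offset along the camera z-axis.
    delta_z = np.median(depth_observed[valid] - depth_rendered[valid])

    t_new = t_est.astype(float).copy()
    z_old = t_new[2]
    t_new[2] = z_old + delta_z
    # Keep the object's 2D projection fixed: x and y scale with depth
    # under a pinhole camera model.
    t_new[0] *= t_new[2] / z_old
    t_new[1] *= t_new[2] / z_old
    return t_new
```

Shifting z by the median depth residual and rescaling x, y preserves the object's image-plane location, which is typically already accurate from the RGB-based pose estimate.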
| Computer specifications | GPU: RTX 3090; CPU: AMD EPYC 7H12 64-core |