Submission name |
---|---
Submission time (UTC) | Oct. 14, 2022, 6:38 a.m.
User | zyMeteroid
Task | Model-based 6D localization of seen objects
Dataset | T-LESS
Training model type | CAD
Training image type | Synthetic + real
Description |
Evaluation scores |
User | zyMeteroid |
---|---
Publication | not yet |
Implementation | PyTorch; code can be found at https://github.com/shanice-l/gdrnpp_bop2022
Training image modalities | RGB |
Test image modalities | RGB-D |
Description | Submitted to: BOP Challenge 2023. Training data: real + provided PBR. Used 3D models: reconstructed for T-LESS, default for the other datasets. Notes: Authors: Xingyu Liu, Ruida Zhang, Chenyangguang Zhang, Bowen Fu, Jiwen Tang, Xiquan Liang, Jingyi Tang, Xiaotian Cheng, Yukang Zhang, Gu Wang, and Xiangyang Ji (Tsinghua University). Based on GDRNPP_PBR_RGB_MModel, we use depth information to further refine the estimated pose. To meet real-time application requirements, we implement a fast refinement module that compares the rendered object depth with the observed depth to refine the translation (see the sketch after this table).
Computer specifications | GPU RTX 3090; CPU AMD EPYC 7H12 64-Core Processor. |
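
The description outlines a depth-based translation refinement: render the object at the estimated pose, compare the rendered depth with the observed depth inside the object region, and shift the translation to close the gap. Below is a minimal sketch of that idea. The function name, the pre-rendered depth inputs, and the median-residual update rule are illustrative assumptions, not the authors' exact implementation (which lives in the linked repository).

```python
import numpy as np

def refine_translation_with_depth(t_est, rendered_depth, observed_depth, mask):
    """Hypothetical sketch: shift the estimated translation along the camera
    viewing ray so the rendered object depth agrees with the observed depth.

    t_est          -- (3,) estimated translation in the camera frame
                      (same unit as the depth maps, e.g. millimeters)
    rendered_depth -- (H, W) depth map rendered at the estimated pose
    observed_depth -- (H, W) depth map from the RGB-D sensor
    mask           -- (H, W) boolean mask of the detected object
    """
    # Only compare pixels where both depth maps carry valid measurements.
    valid = mask & (rendered_depth > 0) & (observed_depth > 0)
    if not valid.any():
        return t_est  # no usable overlap: nothing to refine against

    # A median residual is robust to sensor noise and occlusion boundaries.
    dz = np.median(observed_depth[valid] - rendered_depth[valid])

    # Scaling x and y together with z keeps the object on the same viewing
    # ray, so its 2D projection stays (approximately) fixed while the depth
    # moves to match the observation.
    scale = (t_est[2] + dz) / t_est[2]
    return np.array([t_est[0] * scale, t_est[1] * scale, t_est[2] + dz])
```

Because each update only rereads two depth maps and takes a median, it is cheap; in a pipeline it could be applied once or iterated with a re-render after each shift, which would plausibly fit the real-time budget the description mentions.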