| Submission name | |
|---|---|
| Submission time (UTC) | Oct. 14, 2022, 8:19 a.m. |
| User | zyMeteroid |
| Task | Model-based 6D localization of seen objects |
| Dataset | ITODD |
| Training model type | Default |
| Training image type | Synthetic (only PBR images provided for BOP Challenge 2020 were used) |
| Description | |
| Evaluation scores | |
| User | zyMeteroid |
|---|---|
| Publication | Not yet |
| Implementation | PyTorch; code available at https://github.com/shanice-l/gdrnpp_bop2022 |
| Training image modalities | RGB |
| Test image modalities | RGB-D |
| Description | Submitted to: BOP Challenge 2023. Training data: real + provided PBR. Used 3D models: reconstructed for T-LESS, default for the other datasets. Authors: Xingyu Liu, Ruida Zhang, Chenyangguang Zhang, Bowen Fu, Jiwen Tang, Xiquan Liang, Jingyi Tang, Xiaotian Cheng, Yukang Zhang, Gu Wang, and Xiangyang Ji. Notes: Based on GDRNPP-PBRReal-RGB-SModel, we use depth information to further refine the estimated pose. The refinement module is fast and has no learned parameters: it compares the rendered object depth with the observed depth to refine the translation (see the sketch after this table). |
| Computer specifications | GPU: RTX 3090; CPU: AMD EPYC 7H12 64-core processor. |
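
The notes describe a parameter-free, depth-based refinement without spelling it out. Below is a minimal PyTorch sketch (matching the submission's stated implementation language) of one common way such a refinement can work: compare the object depth rendered at the estimated pose with the sensor depth inside the object mask, then shift the translation along its viewing ray by a robust estimate of the depth difference. All function and variable names here are hypothetical; the exact procedure in gdrnpp_bop2022 may differ.

```python
import torch

def refine_translation_with_depth(t_est, depth_rendered, depth_observed, mask):
    """Hypothetical sketch: shift the estimated translation along its viewing
    ray so the rendered object depth agrees with the observed depth.
    No learned parameters are involved.

    t_est:          (3,) estimated translation in the camera frame, z > 0
    depth_rendered: (H, W) depth map rendered at the estimated pose
    depth_observed: (H, W) depth map from the RGB-D sensor
    mask:           (H, W) bool map of pixels belonging to the object
    """
    # Use only pixels where both depth maps are valid inside the mask.
    valid = mask & (depth_rendered > 0) & (depth_observed > 0)
    if not valid.any():
        return t_est  # nothing to compare against; keep the estimate
    # Median of the per-pixel differences is robust to sensor noise/outliers.
    delta_z = (depth_observed - depth_rendered)[valid].median()
    # Scaling t changes the object's depth while keeping its 2D projection
    # fixed, so x and y stay consistent with the corrected z.
    return t_est * (t_est[2] + delta_z) / t_est[2]
```

Re-rendering the depth at the updated pose and repeating this step a few times gives a cheap, correspondence-free alternative to ICP-style refinement, which is consistent with the "fast refinement module without learned parameters" described in the notes.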