| Submission name | |
|---|---|
| Submission time (UTC) | Oct. 6, 2022, 12:29 p.m. |
| User | zyMeteroid |
| Task | Model-based 6D localization of seen objects |
| Dataset | LM-O |
| Training model type | Default |
| Training image type | Synthetic (only PBR images provided for BOP Challenge 2020 were used) |
| Description | |
| Evaluation scores | |
| User | zyMeteroid |
|---|---|
| Publication | Not yet |
| Implementation | PyTorch; code available at https://github.com/shanice-l/gdrnpp_bop2022 |
| Training image modalities | RGB |
| Test image modalities | RGB |
| Description | GDRNPP for BOP2022. Authors: Xingyu Liu, Ruida Zhang, Chenyangguang Zhang, Bowen Fu, Jiwen Tang, Xiquan Liang, Jingyi Tang, Xiaotian Cheng, Yukang Zhang, Gu Wang, and Xiangyang Ji (Tsinghua University). In the PBR_RGB_MModel setting, all models are trained using only the provided PBR synthetic data. For detection, we adopted YOLOX; in addition, stronger data augmentation and the Ranger optimizer were used (see the sketches after this table). For pose estimation, the differences between our GDRNPP and the CVPR version of GDR-Net mainly include: |
| Computer specifications | GPU: NVIDIA RTX 3090; CPU: AMD EPYC 7H12 (64 cores). |
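
The submission follows the usual two-stage layout: a 2D detector (YOLOX here) proposes object boxes, and each cropped region of interest is passed to the pose network. Below is a minimal PyTorch sketch of that flow, not code from the repository: `detector` and `pose_net` are hypothetical callables standing in for the actual modules, and the 256-pixel crop size is an illustrative assumption.

```python
import torch
import torch.nn.functional as F


def estimate_poses(image, detector, pose_net, crop_size=256):
    """Two-stage sketch: detect 2D boxes, then estimate one 6D pose per RoI.

    image:    float tensor of shape (C, H, W)
    detector: hypothetical callable returning (N, 4) boxes as (x1, y1, x2, y2)
    pose_net: hypothetical callable mapping an RoI crop to (R, t)
    """
    boxes = detector(image)
    poses = []
    for x1, y1, x2, y2 in boxes.round().long().tolist():
        roi = image[:, y1:y2, x1:x2].unsqueeze(0)          # (1, C, h, w) crop
        roi = F.interpolate(roi, size=(crop_size, crop_size),
                            mode="bilinear", align_corners=False)
        R, t = pose_net(roi)                                # (3, 3) rotation, (3,) translation
        poses.append((R, t))
    return poses
```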
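The description names the Ranger optimizer, which is commonly constructed as Lookahead wrapped around RAdam. The sketch below assumes that construction; the hyperparameters (`alpha=0.5`, `k=6`, the learning rate) are common defaults, not the submission's actual training settings.

```python
import torch


class Lookahead:
    """Minimal Lookahead wrapper (Zhang et al., 2019): slow weights are
    periodically pulled toward the fast weights of an inner optimizer."""

    def __init__(self, base_optimizer, alpha=0.5, k=6):
        self.base = base_optimizer      # fast inner optimizer, e.g. RAdam
        self.alpha = alpha              # slow-weight interpolation factor
        self.k = k                      # sync slow/fast weights every k steps
        self.steps = 0
        self.slow = {}                  # per-parameter slow-weight buffers

    def step(self, closure=None):
        loss = self.base.step(closure)
        self.steps += 1
        if self.steps % self.k == 0:
            for group in self.base.param_groups:
                for p in group["params"]:
                    if p not in self.slow:
                        self.slow[p] = p.detach().clone()
                    buf = self.slow[p]
                    buf.add_(p.detach() - buf, alpha=self.alpha)  # slow += a*(fast - slow)
                    p.data.copy_(buf)   # fast weights restart from the slow weights
        return loss

    def zero_grad(self, set_to_none=True):
        self.base.zero_grad(set_to_none=set_to_none)


def ranger(params, lr=1e-4, **kw):
    """Ranger ~= Lookahead(RAdam); a sketch, not the submission's exact recipe."""
    return Lookahead(torch.optim.RAdam(params, lr=lr), **kw)
```

Used like any optimizer in a training loop, e.g. `opt = ranger(model.parameters(), lr=1e-3)` followed by the usual `loss.backward(); opt.step(); opt.zero_grad()`.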
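On the pose side, the CVPR-version GDR-Net regresses rotation with the continuous 6D representation of Zhou et al. (CVPR 2019), which is mapped to a rotation matrix by Gram–Schmidt orthogonalization. The function below is a generic sketch of that conversion, not code taken from the repository.

```python
import torch
import torch.nn.functional as F


def rotation_6d_to_matrix(d6: torch.Tensor) -> torch.Tensor:
    """Convert the continuous 6D rotation representation (Zhou et al., 2019)
    to a 3x3 rotation matrix via Gram-Schmidt orthogonalization.

    d6: (..., 6) tensor, interpreted as the first two columns of R.
    """
    a1, a2 = d6[..., :3], d6[..., 3:]
    b1 = F.normalize(a1, dim=-1)                                   # first orthonormal column
    b2 = F.normalize(a2 - (b1 * a2).sum(-1, keepdim=True) * b1, dim=-1)
    b3 = torch.cross(b1, b2, dim=-1)                               # right-handed third column
    return torch.stack((b1, b2, b3), dim=-1)                       # (..., 3, 3)
```

This parametrization avoids the discontinuities of quaternions and Euler angles, which makes the regression target smoother for the network.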