| Submission name | |
|---|---|
| Submission time (UTC) | Oct. 12, 2022, 3:10 a.m. |
| User | zyMeteroid |
| Task | Model-based 6D localization of seen objects |
| Dataset | HB |
| Training model type | Default |
| Training image type | Synthetic (only PBR images provided for BOP Challenge 2020 were used) |
| Description | GDRNPP_PBR_RGB_SModel_HB |
| Evaluation scores | |
| User | zyMeteroid |
|---|---|
| Publication | Not yet |
| Implementation | PyTorch; code can be found at https://github.com/shanice-l/gdrnpp_bop2022 |
| Training image modalities | RGB |
| Test image modalities | RGB |
| Description | Authors: Xingyu Liu, Ruida Zhang, Chenyangguang Zhang, Bowen Fu, Jiwen Tang, Xiquan Liang, Jingyi Tang, Xiaotian Cheng, Yukang Zhang, Gu Wang, and Xiangyang Ji (Tsinghua University). In the PBRReal-RGB-SModel setting, we use only the provided synthetic training data (PBR) for the LMO, HB, ICBIN, and ITODD datasets, while for YCBV, TUDL, and TLESS we use both the provided real data and the synthetic data (PBR) in training. For detection, we adopted YOLOX; in addition, stronger data augmentation and the Ranger optimizer were used (see the sketches after this table). For pose estimation, the differences between our GDRNPP and the CVPR-version GDR-Net mainly include: |
| Computer specifications | GPU: RTX 3090; CPU: AMD EPYC 7H12 64-core processor. |
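The description outlines a standard two-stage localization flow: a 2D detector (here YOLOX) proposes object boxes, and the pose network is then run on each detected crop. The sketch below illustrates that control flow only, under stated assumptions; `localize_6d`, `detect_objects`, and `estimate_pose` are hypothetical placeholders, not functions from the gdrnpp_bop2022 repository.

```python
# Minimal sketch of a two-stage detect-then-pose pipeline.
# All names here are illustrative placeholders, not the submission's API.
from typing import Callable, List, Tuple

import torch

Box = Tuple[int, int, int, int]  # x1, y1, x2, y2 in pixels


def localize_6d(
    image: torch.Tensor,                                   # (3, H, W) RGB tensor
    detect_objects: Callable[[torch.Tensor], List[Box]],   # e.g. a YOLOX wrapper
    estimate_pose: Callable[[torch.Tensor], torch.Tensor], # pose net on one crop
) -> List[torch.Tensor]:
    """Detect objects, crop each box, and regress one 6D pose per crop."""
    poses = []
    for (x1, y1, x2, y2) in detect_objects(image):
        crop = image[:, y1:y2, x1:x2]        # zoom in on the detected object
        poses.append(estimate_pose(crop))    # e.g. a 3x4 [R|t] pose matrix
    return poses
```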
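Likewise, here is a minimal sketch of how the Ranger optimizer and stronger photometric augmentation mentioned in the description might be wired into a PyTorch training loop. The `Ranger` import assumes the third-party torch-optimizer package, and the augmentation recipe and `PoseNet` module are illustrative stand-ins, not the submission's actual configuration.

```python
# Illustrative only: Ranger optimizer + strong color augmentation in PyTorch.
# `PoseNet` and the augmentation recipe are hypothetical stand-ins.
import torch
import torch.nn as nn
from torch_optimizer import Ranger     # assumes: pip install torch-optimizer
from torchvision import transforms

# Strong photometric augmentation on RGB crops (hypothetical recipe).
augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.RandomGrayscale(p=0.1),
    transforms.ToTensor(),
])


class PoseNet(nn.Module):
    """Stand-in for the pose-regression network, kept deliberately tiny."""

    def __init__(self) -> None:
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head = nn.Linear(16, 9)  # e.g. rotation + translation parameters

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))


model = PoseNet()
optimizer = Ranger(model.parameters(), lr=1e-4)


def train_step(images: torch.Tensor, targets: torch.Tensor) -> float:
    """One optimization step on a batch of augmented crops."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```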