| Field | Value |
|---|---|
| Submission time (UTC) | Oct. 14, 2022, 12:40 p.m. |
| Task | 2D detection of seen objects |
| Training model type | None |
| Training image type | Synthetic (only PBR images provided for BOP Challenge 2020 were used) |
| Implementation | PyTorch; code available at https://github.com/shanice-l/gdrnpp_bop2022 |
| Training image modalities | RGB |
| Test image modalities | RGB |
GDRNPPDet for BOP2022
Authors: Xingyu Liu, Ruida Zhang, Chenyangguang Zhang, Bowen Fu, Jiwen Tang, Xiquan Liang, Jingyi Tang, Xiaotian Cheng, Yukang Zhang, Gu Wang, and Xiangyang Ji (Tsinghua University).
In the PBR+Real setting, we trained one model per dataset. For the LMO, HB, ICBIN, and ITODD datasets, only the provided synthetic (PBR) training images were used; for YCBV, TUDL, and TLESS, both the provided real images and the synthetic (PBR) images were used.
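The per-dataset split described above could be captured in a small configuration table. This is a hypothetical sketch for illustration only; the names and structure are not taken from the released code:

```python
# Hypothetical per-dataset training-source configuration reflecting the
# setup described above (identifiers are illustrative, not from the repo).
TRAIN_SOURCES = {
    # PBR-only datasets
    "lmo":   ["pbr"],
    "hb":    ["pbr"],
    "icbin": ["pbr"],
    "itodd": ["pbr"],
    # PBR + real-image datasets
    "ycbv":  ["pbr", "real"],
    "tudl":  ["pbr", "real"],
    "tless": ["pbr", "real"],
}

def uses_real_images(dataset: str) -> bool:
    """Return True if real training images are used for this dataset."""
    return "real" in TRAIN_SOURCES[dataset]
```

One model would then be trained per key, each consuming only its listed sources.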
GDRNPPDet is based on YOLOX, trained with stronger data augmentation and the Ranger optimizer.
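Ranger combines RAdam with Lookahead. As a minimal sketch of the Lookahead component only (not the actual training code, which lives in the linked repository), here it wraps plain SGD on a 1-D quadratic; the step function, learning rate, and problem are illustrative assumptions:

```python
def sgd_step(w, grad, lr=0.1):
    # Plain gradient step; Ranger would use RAdam here instead.
    return w - lr * grad

def lookahead_minimize(w0, grad_fn, steps=50, k=5, alpha=0.5):
    """Lookahead: every k fast-optimizer steps, move the slow weights
    a fraction alpha toward the fast weights, then reset the fast weights."""
    slow = fast = w0
    for t in range(1, steps + 1):
        fast = sgd_step(fast, grad_fn(fast))
        if t % k == 0:
            slow = slow + alpha * (fast - slow)  # slow-weight interpolation
            fast = slow                          # restart fast weights at slow
    return slow

# Toy problem: minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w = lookahead_minimize(10.0, lambda w: 2.0 * (w - 3.0))
# w converges toward the minimizer 3.0
```

The slow/fast interpolation smooths the fast optimizer's trajectory, which is the stabilizing behavior Ranger inherits from Lookahead.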
| Computer specifications | GPU RTX 3090; CPU AMD EPYC 7H12 64-Core Processor |