| Submission name | |
|---|---|
| Submission time (UTC) | May 12, 2026, 4:47 p.m. |
| User | helose |
| Task | Model-based 6D localization of seen objects |
| Dataset | ITODD |
| Description | |
| Evaluation scores | |
| User | helose |
|---|---|
| Publication | RACE-6D: Real-time Accurate Coarse-to-finE object 6D Pose Transformer, CVPR 2026 Findings |
| Implementation | PyTorch; code available at https://github.com/Yoonwoo-Ha/RACE-6D |
| Training image modalities | RGB |
| Test image modalities | RGB |
| Description | Training data: real + provided PBR.<br>Used 3D models: default for other datasets.<br>Authors: Yoonwoo Ha, Hyungpil Moon (Sungkyunkwan University).<br>For the LMO, HB, and ICBIN datasets, only the provided synthetic training data (PBR) is used; for YCBV, TUDL, and TLESS, both the provided real data and the synthetic data (PBR) are used.<br>For detection, we developed a unified pose estimation model that encompasses the object detection process. |
| Computer specifications | GPU: NVIDIA RTX 3090; CPU: Intel Core i9-12900K |