Submission name | |
---|---|
Submission time (UTC) | Oct. 8, 2022, 10:57 a.m. |
User | Yang-hai |
Task | Model-based 6D localization of seen objects |
Dataset | HB |
Training model type | Default |
Training image type | Synthetic (only PBR images provided for BOP Challenge 2020 were used) |
Description | |
Evaluation scores | |
User | Yang-hai |
---|---|
Publication | Yang Hai et al.: Rigidity-Aware Detection for 6D Object Pose Estimation; Yinlin Hu et al.: Perspective Flow Aggregation for Data-Limited 6D Object Pose Estimation, ECCV 2022 |
Implementation | |
Training image modalities | RGB |
Test image modalities | RGB-D |
Description | We train a single model for all objects on each dataset, based on an architecture that combines object detection and pose regression. Object detection: extended FCOS. Pose regression: extended PFA-Pose. Data: PBR. RGB-D track: the same models as in the RGB track, with RANSAC-Kabsch used to exploit depth (see the sketches after the table). The main differences from FCOS:
The main differences from the original PFA-Pose paper:
List of contributors: Yang Hai, Rui Song, Zhiqiang Liu, Jiaojiao Li (Xidian University); Mathieu Salzmann, Pascal Fua (EPFL); Yinlin Hu (Magic Leap) |
Computer specifications | NVIDIA 3090 |
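
As context for the pose-regression step: PFA-style methods typically aggregate the predicted flow into dense 2D-3D correspondences and then recover the pose with RANSAC-PnP. The sketch below illustrates only that generic final step, not the submission's actual code; the inputs `obj_pts`, `img_pts`, and `K`, as well as all parameter values, are assumptions.

```python
import cv2
import numpy as np

def pose_from_correspondences(obj_pts, img_pts, K):
    """Recover a 6D pose (R, t) from 2D-3D correspondences with RANSAC-PnP.

    obj_pts: (N, 3) 3D points on the object model (hypothetical input).
    img_pts: (N, 2) their predicted 2D image locations (hypothetical input).
    K:       (3, 3) camera intrinsic matrix.
    """
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj_pts.astype(np.float64),
        img_pts.astype(np.float64),
        K.astype(np.float64),
        distCoeffs=None,
        iterationsCount=150,      # assumed iteration budget
        reprojectionError=3.0,    # assumed inlier threshold in pixels
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)    # convert axis-angle to a rotation matrix
    return R, tvec.reshape(3)
```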
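
The RGB-D track reportedly reuses the RGB models and applies RANSAC-Kabsch to exploit depth. A minimal, self-contained sketch of that idea, assuming two corresponding 3D point sets (e.g. model points under the RGB-predicted pose and points back-projected from the depth map), is shown below; function names, the sample size, threshold, and iteration count are assumptions rather than the submission's implementation.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via SVD (Kabsch)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def ransac_kabsch(src, dst, iters=200, inlier_thresh=0.01, sample_size=4, seed=0):
    """Robustly fit (R, t) between two corresponding 3D point sets with outliers."""
    rng = np.random.default_rng(seed)
    best_R, best_t = None, None
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), size=sample_size, replace=False)
        R, t = kabsch(src[idx], dst[idx])
        err = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = err < inlier_thresh            # threshold in the same unit as the points
        if inliers.sum() > best_inliers.sum():
            best_R, best_t, best_inliers = R, t, inliers
    if best_inliers.sum() >= 3:
        best_R, best_t = kabsch(src[best_inliers], dst[best_inliers])  # refit on inliers
    return best_R, best_t, best_inliers
```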