Submission name | coarse only
---|---
Submission time (UTC) | Sept. 3, 2024, 12:26 p.m.
User | sp9103
Task | Model-based 6D localization of unseen objects
Dataset | HB
Description | 
Evaluation scores | 
User | sp9103
---|---
Publication | Not yet |
Implementation | - |
Training image modalities | RGB-D |
Test image modalities | RGB |
Description | Submitted to: BOP Challenge 2024<br>Training data: MegaPose-GSO and MegaPose-ShapeNetCore<br>Onboarding data: 42 rendered templates<br>Used 3D models: Default, CAD<br>Notes: Our coarse estimator is based on local feature matching between the query image and multiple pre-rendered templates. We model the query and rendered images as aggregations of multiple patches. The coarse network finds matches between the patch centers of the input crop and those of the rendered templates. From the 42 templates, the best template is selected and a pose hypothesis is generated by RANSAC-PnP for the RGB case and MAGSAC++ [A] for the RGB-D case (an illustrative sketch of the RGB branch follows the table below). We use CroCo [B] pretraining for our coarse estimator. Note that the inputs to our neural networks are the RGB images only. The 2D detector is specified in parentheses in the submission title; it uses FastSAM object proposals.<br>[A] Barath et al.: MAGSAC++, a fast, reliable and accurate robust estimator, CVPR 2020<br>[B] Weinzaepfel et al.: CroCo: Self-Supervised Pre-training for 3D Vision Tasks by Cross-View Completion, NeurIPS 2022 |
Computer specifications |
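The coarse step described above (pick the best of the 42 rendered templates from the patch-center matches, then generate a pose hypothesis with RANSAC-PnP in the RGB case) can be sketched as follows. This is not the authors' code: the function name, input layout, template-scoring rule, and RANSAC parameters are assumptions, and OpenCV's `cv2.solvePnPRansac` is used as a generic stand-in for the RGB branch (the MAGSAC++ RGB-D branch is not shown).

```python
import numpy as np
import cv2


def coarse_pose_from_templates(per_template_matches, K):
    """Pick the best-matching rendered template and estimate a coarse pose.

    per_template_matches: one dict per pre-rendered template, each with
        "pts2d": (N, 2) matched patch centers in the query crop (pixels)
        "pts3d": (N, 3) corresponding 3D object points, looked up from the
                 template's rendered geometry
    K: (3, 3) camera intrinsics of the query image.
    Returns (R, t, inlier_indices) or None if PnP fails.
    """
    # Hypothetical scoring rule: take the template with the most matches.
    best = max(per_template_matches, key=lambda m: len(m["pts2d"]))
    if len(best["pts2d"]) < 4:
        return None

    # RGB branch: RANSAC-PnP on the 2D-3D patch-center correspondences.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        best["pts3d"].astype(np.float64),
        best["pts2d"].astype(np.float64),
        K.astype(np.float64),
        distCoeffs=None,
        reprojectionError=3.0,   # assumed inlier threshold, in pixels
        iterationsCount=1000,
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.reshape(3), inliers.reshape(-1)
```

In the actual method the 2D-3D correspondences come from the learned patch matching against the CroCo-pretrained coarse network and the templates' rendered geometry; here they are simply assumed to be available as arrays.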