Submission: Co-op (CNOS, 5 Hypo, RGBD)/HB/default detection(CNOS)

Submission name: default detection(CNOS)
Submission time (UTC): Sept. 14, 2024, 11:05 a.m.
User: sp9103
Task: Model-based 6D localization of unseen objects
Dataset: HB
Description:
Evaluation scores
AR:0.865
AR_MSPD:0.875
AR_MSSD:0.870
AR_VSD:0.849
average_time_per_image:7.825

Method: Co-op (CNOS, 5 Hypo, RGBD)

User: sp9103
Publication: Not yet
Implementation: -
Training image modalities: RGB-D
Test image modalities: RGB-D
Description:

Submitted to: BOP Challenge 2024

Training data: MegaPose-GSO and MegaPose-ShapeNetCore

Onboarding data: 42 rendered templates

Used 3D models: Default, CAD

Notes:

Our method consists of three steps: coarse estimation, pose refinement, and optional pose selection. We train one model per step and use the same models for all datasets. For each detection, we extract the top-k hypotheses from our coarse network, and each hypothesis is refined by the refinement network. When k > 1, the refined hypotheses are scored by our pose selection network and the highest-scoring one is taken as the output. The value of k is specified in the title.
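
The following is a minimal sketch of this three-step pipeline; coarse_net, refiner_net, and selector_net and their interfaces are hypothetical placeholders for illustration, not the authors' implementation.

    # Hypothetical pipeline sketch: coarse top-k hypotheses -> per-hypothesis
    # refinement -> optional selection of the best refined hypothesis.
    def estimate_pose(crop, templates, K, k=5, refine_iters=5):
        hypotheses = coarse_net.top_k_hypotheses(crop, templates, k=k)    # step 1: coarse estimation
        refined = [refiner_net.refine(crop, h, K, iters=refine_iters)     # step 2: refine each hypothesis
                   for h in hypotheses]
        if len(refined) == 1:                                             # step 3 is only needed when k > 1
            return refined[0]
        scores = [selector_net.score(crop, h, K) for h in refined]
        best = max(range(len(refined)), key=lambda i: scores[i])
        return refined[best]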

Our coarse estimator is based on local feature matching between the query image and multiple pre-rendered templates. We model the query and rendered images as aggregations of multiple patches, and the coarse network finds matches between the patch centers of the input crop and those of the rendered templates. From the 42 templates, the top-k templates are selected, and pose hypotheses are generated by RANSAC-PnP in the RGB case and by MAGSAC++ [A] in the RGB-D case.
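
As an illustration of hypothesis generation in the RGB case, the sketch below solves RANSAC-PnP with OpenCV from 2D-3D matches between patch centers of the query crop and the selected template; the array names and thresholds are assumptions, and the RGB-D case would instead use a robust 3D-3D estimator such as MAGSAC++ [A].

    import cv2
    import numpy as np

    # pts3d: (N, 3) patch centers lifted to the object frame via the matched
    # template; pts2d: (N, 2) corresponding patch centers in the query crop;
    # K: (3, 3) camera intrinsics. Threshold values are illustrative.
    def pose_hypothesis_rgb(pts3d, pts2d, K):
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            np.asarray(pts3d, dtype=np.float64),
            np.asarray(pts2d, dtype=np.float64),
            K, None,
            reprojectionError=3.0, iterationsCount=1000)
        if not ok:
            return None
        R, _ = cv2.Rodrigues(rvec)  # axis-angle -> 3x3 rotation matrix
        return R, tvec, inliers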

The pose refiner is an optical-flow-based method similar to GenFlow [B], but for faster inference we do not follow the RAFT structure, which lets us bypass the inner-loop computation. We model the flow estimation as probabilistic regression of a Laplace distribution.
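
A minimal PyTorch sketch of Laplace-based probabilistic flow regression is shown below: the network predicts a per-pixel flow mean and log-scale, and the training loss is the negative log-likelihood of the ground-truth flow under the predicted Laplace distribution. Tensor names and shapes are assumptions, not the authors' code.

    import math
    import torch

    # flow_pred, flow_gt: (B, 2, H, W); log_b: (B, 2, H, W) predicted log-scale;
    # valid_mask: (B, 1, H, W), 1 where ground-truth flow is valid (it broadcasts
    # over the two flow channels).
    def laplace_flow_nll(flow_pred, log_b, flow_gt, valid_mask):
        b = log_b.exp()
        # Per-element Laplace NLL: |x - mu| / b + log(2b)
        nll = (flow_gt - flow_pred).abs() / b + log_b + math.log(2.0)
        # Sum of the two per-channel NLL terms, averaged over valid pixels.
        return (nll * valid_mask).sum() / valid_mask.sum().clamp(min=1)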

We use CroCo [C] pretraining for our coarse estimator, pose refiner, and pose selection model. Note that the inputs to our neural networks are RGB images only. In this submission, each coarse hypothesis is refined 5 times. The 2D detector is specified in parentheses in the title; it relies on FastSAM object proposals.
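
The per-hypothesis refinement can be pictured roughly as the render-and-compare loop below, assuming the predicted flow is converted into 2D-3D correspondences via the rendered depth and the pose is re-solved in each of the 5 iterations; render, flow_net, and lift_to_3d are hypothetical placeholders, and the actual pose-update step of the refiner may differ.

    # Rough render-and-compare refinement sketch (5 iterations per hypothesis).
    def refine_hypothesis(crop, R, t, K, mesh, iters=5):
        for _ in range(iters):
            rendering, depth = render(mesh, R, t, K)          # object rendered at the current pose
            flow = flow_net(rendering, crop)                  # dense 2D flow: rendering -> query crop
            pts3d, pts2d = lift_to_3d(flow, depth, R, t, K)   # 2D-3D correspondences from flow + depth
            result = pose_hypothesis_rgb(pts3d, pts2d, K)     # reuse the PnP helper sketched above
            if result is not None:
                R, t, _ = result
        return R, t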

[A] Barath et al.: MAGSAC++, a fast, reliable and accurate robust estimator, CVPR 2020
[B] Moon et al.: GenFlow: Generalizable Recurrent Flow for 6D Pose Refinement of Novel Objects, CVPR 2024
[C] Weinzaepfel et al.: CroCo v2: Improved Cross-view Completion Pre-training for Stereo Matching and Optical Flow, ICCV 2023

Computer specifications