Submission name |
---|---
Submission time (UTC) | Sept. 22, 2023, 4:07 p.m.
User | agimus-happypose
Task | Model-based 6D localization of seen objects
Dataset | TUD-L
Description | Submission prepared by Elliot Maître, Médéric Fourmy, Yann Labbé
Evaluation scores

User | agimus-happypose
---|---
Publication | Labbé et al.: MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare, CoRL 2022
Implementation | https://github.com/agimus-project/happypose
Training image modalities | RGB
Test image modalities | RGB-D
Description | This submission was prepared by Elliot Maître, Médéric Fourmy, Lucas Manuelli, Yann Labbé. GDRNPPDet_PBRReal detections [A] (the default detections for Task 1) are used as input to the MegaPose pose estimation method [B], without fine-tuning on the core BOP challenge datasets. For each detection, we run the MegaPose coarse network with 5 hypotheses. Each of the top-5 hypotheses is refined using the refinement strategy described below; the refined hypotheses are then scored with the coarse network, and the best one is taken as the pose estimate for the initial detection. The refinement strategy for one hypothesis is as follows: we first run 5 iterations of the RGB refinement network; we then render a depth map of the hypothesis and establish 3D-3D correspondences between object points in the rendered depth map and in the observed depth map using pixel-wise alignment; finally, we run Teaser++ [C] to align the two point clouds (both stages are sketched in the code examples below this table). Several improvements have been made over the original MegaPose paper [B]. [A] Liu et al.: https://github.com/shanice-l/gdrnpp_bop2022. [B] Labbé et al.: MegaPose: 6D Pose Estimation of Novel Objects via Render & Compare, CoRL 2022. [C] Yang et al.: TEASER: Fast and Certifiable Point Cloud Registration, IEEE Transactions on Robotics, 2021. This work was granted access to the HPC resources of IDRIS under the allocation 011014301 made by GENCI.
Computer specifications | NVIDIA Tesla V100 (32 GB)
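
As a rough illustration of the multi-hypothesis strategy described above, here is a minimal sketch in Python. The `coarse_model` and `rgb_refiner` interfaces (`propose`, `refine`, `score`) are hypothetical stand-ins, not the actual HappyPose API; poses are assumed to be 4x4 matrices, and `depth_refine` is sketched in the next example.

```python
import numpy as np

N_HYPOTHESES = 5          # coarse hypotheses per detection
N_RGB_ITERATIONS = 5      # RGB refiner iterations per hypothesis


def estimate_pose(detection, rgb, depth, K, coarse_model, rgb_refiner):
    """Multi-hypothesis coarse -> refine -> re-score pipeline (sketch)."""
    # 1) The coarse network proposes initial pose hypotheses.
    hypotheses = coarse_model.propose(detection, rgb, n=N_HYPOTHESES)

    refined = []
    for pose in hypotheses:
        # 2) Iterative render-and-compare refinement on RGB only.
        for _ in range(N_RGB_ITERATIONS):
            pose = rgb_refiner.refine(pose, detection, rgb)
        # 3) Depth-based refinement (see the Teaser++ sketch below).
        pose = depth_refine(pose, depth, K)
        refined.append(pose)

    # 4) Re-score the refined hypotheses with the coarse network and
    #    keep the best one as the estimate for this detection.
    scores = [coarse_model.score(pose, detection, rgb) for pose in refined]
    return refined[int(np.argmax(scores))]
```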
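
The depth-based refinement step could look roughly like the following, assuming the `teaserpp_python` bindings distributed with TEASER++ and a hypothetical `render_depth(pose, K, shape)` helper that renders the object's depth map under a pose hypothesis. The `noise_bound` value is an illustrative guess, not a parameter reported in the submission.

```python
import numpy as np
import teaserpp_python


def backproject(depth, K):
    """Back-project a depth map to a 3 x (H*W) point cloud in camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)
    return np.linalg.inv(K) @ pix * depth.reshape(1, -1)


def depth_refine(pose, observed_depth, K, noise_bound=0.01):
    """Align rendered and observed object points with Teaser++ (sketch)."""
    # Render a depth map of the object under the current pose hypothesis
    # (render_depth is a hypothetical renderer interface).
    rendered_depth = render_depth(pose, K, observed_depth.shape)

    # Pixel-wise alignment: the same pixel indexes corresponding 3D points
    # in the rendered and observed clouds, wherever both depths are valid.
    valid = (rendered_depth > 0) & (observed_depth > 0)
    src = backproject(rendered_depth, K)[:, valid.ravel()]
    dst = backproject(observed_depth, K)[:, valid.ravel()]

    # Robust rigid registration of the two point clouds with Teaser++.
    params = teaserpp_python.RobustRegistrationSolver.Params()
    params.estimate_scaling = False
    params.noise_bound = noise_bound  # metres; illustrative value only
    solver = teaserpp_python.RobustRegistrationSolver(params)
    solver.solve(src, dst)
    sol = solver.getSolution()

    # Compose the estimated correction with the current hypothesis.
    T_corr = np.eye(4)
    T_corr[:3, :3] = sol.rotation
    T_corr[:3, 3] = sol.translation
    return T_corr @ pose
```

Because every valid object pixel directly yields one correspondence, the pixel-wise alignment needs no nearest-neighbour search before the robust registration step.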