| Submission name | FoundPose-Coarse |
|---|---|
| Submission time (UTC) | Jan. 27, 2024, 11:23 a.m. |
| User | epi |
| Task | Model-based 6D localization of unseen objects |
| Dataset | LM-O |
| Training model type | Default |
| Training image type | None |
| Description | |
| Evaluation scores | |
| User | epi |
|---|---|
| Publication | |
| Implementation | |
| Training image modalities | RGB |
| Test image modalities | RGB |
| Description | The presented results were obtained with the refinement-free version of FoundPose (row 1 of Table 1 in [A]). In this submission, FoundPose uses the default CNOS-FastSAM [B] segmentations provided for BOP'23. For pose estimation, the method uses features from layer 18 of DINOv2 (ViT-L) with registers [C]. Note that FoundPose does not perform any task-specific training -- it relies only on a frozen FastSAM (via CNOS) and a frozen DINOv2 (see the feature-extraction sketch after the table). [A] Anonymous: FoundPose: Unseen Object Pose Estimation with Foundation Features. [B] Nguyen et al.: CNOS: A Strong Baseline for CAD-Based Novel Object Segmentation, ICCVW 2023. [C] Darcet et al.: Vision Transformers Need Registers, arXiv 2023. |
| Computer specifications | Tesla P100 16GB |
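
The description states that the method uses patch features from an intermediate layer (layer 18) of a frozen DINOv2 ViT-L with registers. The snippet below is a minimal sketch of how such features can be extracted with the public DINOv2 torch.hub entry point; it is not the official FoundPose code, and the exact layer indexing (here 0-based index 17 for the 18th block) and any feature post-processing are assumptions.

```python
# Minimal sketch: frozen DINOv2 (ViT-L/14 with registers) patch-feature
# extraction from an intermediate layer, as a stand-in for the features
# described in the submission. Not the official FoundPose implementation.
import torch

# Load the frozen DINOv2 ViT-L/14 backbone with register tokens.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14_reg")
model.eval()

# Dummy RGB crop; FoundPose would instead use a crop derived from the
# CNOS-FastSAM segmentation of the test image.
image = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    # get_intermediate_layers returns per-layer patch tokens with the class
    # and register tokens already stripped; index 17 = 18th block (assumed
    # to correspond to "layer 18" in the description).
    (patch_features,) = model.get_intermediate_layers(image, n=[17], reshape=True)

# For a 224x224 input and patch size 14, this yields a (1, 1024, 16, 16)
# feature map that can be used for building descriptors.
print(patch_features.shape)
```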