Submission: MUSE/HANDAL/full

Submission name full
Submission time (UTC) Nov. 16, 2024, 2:33 a.m.
User csm8167
Task Model-based 2D detection of unseen objects
Dataset HANDAL
Description
Evaluation scores
AP: 0.270
AP50: 0.360
AP75: 0.286
AP_large: 0.307
AP_medium: 0.125
AP_small: 0.001
AR1: 0.402
AR10: 0.421
AR100: 0.421
AR_large: 0.493
AR_medium: 0.145
AR_small: 0.001
average_time_per_image: 1.721

Method: MUSE

User csm8167
Publication Not yet published
Implementation
Training image modalities None
Test image modalities RGB
Description

Submitted to: BOP Challenge 2024

MUSE: Model-agnostic Unseen 2D Object Recognition via 3D-aware Similarity of Multi-Embeddings

We present MUSE, a training-free and model-agnostic framework for unseen 2D object recognition, leveraging 3D-aware similarity computed from multi-embedding descriptors.

Specifically, MUSE integrates class-level and patch-level embeddings into a novel similarity metric and introduces the Integrated von Mises-Fisher (I-vMF) similarity, which applies the von Mises-Fisher (vMF) distribution to weight the contributions of the 3D template views. This weighting reflects the assumption that high similarity scores are concentrated around the correct template view on the viewing sphere.
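
To make the weighting concrete, below is a minimal NumPy sketch of vMF-weighted aggregation over template views. The method is not yet published, so the anchor choice (the viewpoint of the highest-scoring view), the concentration default, and all function names are illustrative assumptions rather than the actual MUSE formulation.

```python
import numpy as np

def vmf_weights(view_dirs, mu, kappa=10.0):
    """von Mises-Fisher kernel weights over template viewpoints.

    view_dirs: (N, 3) unit viewing directions of the 3D template views
    mu:        (3,)   unit direction the weight mass concentrates around
    kappa:     concentration; larger values peak the weights more
               sharply at mu (default here is illustrative)
    """
    logits = kappa * view_dirs @ mu  # kappa * cos(angle between view and mu)
    logits -= logits.max()           # subtract the max for numerical stability
    w = np.exp(logits)
    return w / w.sum()               # normalize the weights to sum to 1

def integrated_vmf_similarity(sims, view_dirs, kappa=10.0):
    """Aggregate per-view similarities sims (N,) with vMF weights,
    anchoring the vMF mode at the best-scoring viewpoint (an assumption
    of this sketch, not necessarily the published rule)."""
    mu = view_dirs[np.argmax(sims)]
    return float(np.sum(vmf_weights(view_dirs, mu, kappa) * sims))
```

One property worth noting: kappa interpolates between familiar aggregations. As kappa grows, the weighted sum approaches the maximum over views; as kappa tends to zero, it approaches the mean. This matches the stated assumption that correct matches concentrate around a single viewpoint.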

To further enhance reliability, we propose Confidence-Assisted Similarity (CAS), which modulates the I-vMF similarity using the uncertainty estimate of the vision model, giving more influence to confident predictions.
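
The exact CAS modulation rule is likewise unpublished; the sketch below shows one plausible form, a confidence-weighted rescaling of the I-vMF score. The exponent alpha and the function name are hypothetical.

```python
def confidence_assisted_similarity(ivmf_sim, confidence, alpha=1.0):
    """Hypothetical CAS sketch: rescale the I-vMF similarity by the
    vision model's confidence (e.g. a detector score in [0, 1]) so
    that confident proposals weigh more in the final ranking.
    alpha (assumed) controls how strongly confidence modulates."""
    return ivmf_sim * (confidence ** alpha)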

As our approach relies solely on similarity computations over feature embeddings, MUSE is fully model-agnostic and can be integrated with any vision backbone without fine-tuning.

In our implementation, we use Grounding DINO and SAM2 to extract detection proposals, and adopt DINOv2-Large as the feature encoder for computing multi-level similarity.
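
For reference, the snippet below sketches the feature side of such a pipeline: DINOv2-Large embeddings (the torch.hub entry point is real) for proposal crops, which are assumed to come from Grounding DINO boxes refined by SAM2 masks, plus an illustrative fusion of class-level and patch-level cosine similarity. The equal-weight fusion, the crop size, and the helper names are assumptions; the exact multi-level metric used by MUSE is not public.

```python
import torch
import torch.nn.functional as F

# DINOv2-Large backbone via its official torch.hub entry point.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14").eval()

@torch.no_grad()
def embed(crops):
    """crops: (B, 3, 224, 224) RGB proposal crops, ImageNet-normalized.
    224 is a multiple of the ViT-L/14 patch size; the crop size is an
    assumption of this sketch. Returns L2-normalized class-level and
    patch-level embeddings."""
    feats = model.forward_features(crops)
    cls = F.normalize(feats["x_norm_clstoken"], dim=-1)       # (B, D)
    patch = F.normalize(feats["x_norm_patchtokens"], dim=-1)  # (B, P, D)
    return cls, patch

@torch.no_grad()
def multi_level_similarity(q_cls, q_patch, t_cls, t_patch):
    """Illustrative fusion of class- and patch-level similarity between
    one proposal (q_*) and one template view (t_*); the equal weighting
    is an assumption, not the published MUSE metric."""
    s_cls = (q_cls * t_cls).sum(-1)           # cosine of class tokens
    # For each query patch, take its best-matching template patch,
    # then average over the query patches.
    s_patch = (q_patch @ t_patch.T).max(-1).values.mean()
    return 0.5 * (s_cls + s_patch)
```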

Authors Anonymous (temporary)

Computer specifications RTX 4090