Submission name | |
---|---|
Submission time (UTC) | Sept. 26, 2023, 6:44 p.m. |
User | acaraffa@fbk.eu |
Task | Model-based 6D localization of unseen objects |
Dataset | HB |
Description | |
Evaluation scores | |
User | acaraffa@fbk.eu |
---|---|
Publication | |
Implementation | |
Training image modalities | None |
Test image modalities | RGB-D |
Computer specifications | NVIDIA A40 GPU |

Description:

- Submitted to: BOP Challenge 2023
- Training data: 3D point clouds of indoor scenes from the 3DMatch dataset [A]
- Onboarding data: no onboarding required
- Used 3D models: the default point clouds provided in "models_reconst" for T-LESS, "models_eval" for ITODD, and "models" for the other datasets
- Notes: PoZe performs pose estimation of unseen objects through zero-shot learning. It takes as input a colored 3D point cloud representing the object and an RGB-D image capturing the scene (a sketch of how these two inputs can be prepared follows the reference below). PoZe consists of five modules:

[A] Zeng et al.: 3DMatch: Learning the Matching of Local 3D Geometry in Range Scans, CVPR 2017
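The submission description names PoZe's two inputs but includes no code. The sketch below is not part of the submission; it only illustrates, under assumptions, how those inputs could be prepared: loading a default BOP object point cloud from the stated models folder and back-projecting an RGB-D frame into a colored scene point cloud. The directory layout, function names, depth scale, and the use of Open3D/NumPy are assumptions made for illustration.

```python
import numpy as np
import open3d as o3d  # assumed here only for reading the BOP model point clouds


def load_object_point_cloud(models_dir, obj_id):
    """Load a default BOP object point cloud, e.g. models/obj_000001.ply (path layout assumed)."""
    path = f"{models_dir}/obj_{obj_id:06d}.ply"
    return o3d.io.read_point_cloud(path)


def backproject_rgbd(rgb, depth, K, depth_scale=1000.0):
    """Back-project an RGB-D image into a colored scene point cloud.

    rgb:   (H, W, 3) uint8 color image
    depth: (H, W) depth map, assumed here to be in millimetres
    K:     (3, 3) camera intrinsic matrix
    Returns (N, 3) points in metres and (N, 3) colors in [0, 1].
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.astype(np.float64) / depth_scale
    valid = z > 0  # keep only pixels with a depth measurement

    # Pinhole back-projection using the intrinsics
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]

    points = np.stack([x[valid], y[valid], z[valid]], axis=-1)
    colors = rgb[valid].astype(np.float64) / 255.0
    return points, colors
```

The object point cloud and the colored scene point cloud would then be passed to PoZe's five modules; since those modules are not detailed in this submission description, they are not sketched here.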