| Submission name | |
|---|---|
| Submission time (UTC) | Dec. 27, 2021, 6:11 a.m. |
| User | YishengHe |
| Task | 6D localization of seen objects |
| Dataset | YCB-V |
| Training model type | Default |
| Training image type | Synthetic + real |
| Description | We use the BOP real and synthetic data for training. The predicted result is not refined by any iterative refinement algorithm, e.g., ICP. Difference from the original implementation: we regenerate the SIFT-FPS 3D keypoints for each object from the BOP YCB-V object mesh models, because the object coordinate system in the BOP benchmark differs from that of the original dataset. |
| Evaluation scores | |
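The keypoint regeneration mentioned in the description relies on farthest point sampling (FPS), which spreads a fixed number of keypoints evenly over candidate 3D points taken from the object mesh. A minimal sketch of FPS is below; the function name and the candidate-point setup are illustrative assumptions, not taken from the FFB6D codebase:

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily select k keypoints from an (N, 3) point set: each new
    keypoint is the point farthest from all previously selected ones."""
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(points.shape[0]))]
    # Distance from every point to its nearest selected keypoint so far.
    dists = np.linalg.norm(points - points[selected[0]], axis=1)
    for _ in range(k - 1):
        idx = int(np.argmax(dists))  # farthest remaining point
        selected.append(idx)
        dists = np.minimum(dists, np.linalg.norm(points - points[idx], axis=1))
    return points[selected]

# Illustrative usage: pick 2 well-separated keypoints from 4 candidates.
candidates = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [10., 10., 10.]])
keypoints = farthest_point_sampling(candidates, 2)
```

In SIFT-FPS, the candidate set would come from SIFT features detected on rendered views of the mesh, back-projected to 3D, rather than raw mesh vertices.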
| User | YishengHe |
|---|---|
| Publication | Yisheng He et al.: FFB6D: A Full Flow Bidirectional Fusion Network for 6D Pose Estimation, CVPR 2021 (Oral) |
| Implementation | https://github.com/ethnhe/FFB6D |
| Training image modalities | RGB-D |
| Test image modalities | RGB-D |
| Description | FFB6D is a full flow bidirectional fusion network for 6D pose estimation from a single RGB-D image. We use the BOP-provided real and synthetic images for training. No iterative refinement algorithm is applied. |
| Computer specifications | CPU: Intel(R) Xeon(R) Gold 6130 @ 2.10GHz; GPU: GeForce RTX 2080 Ti |