| Submission name | |
|---|---|
| Submission time (UTC) | Aug. 18, 2020, 7:11 p.m. |
| User | yann_labbe |
| Task | Model-based 6D localization of seen objects |
| Dataset | ITODD |
| Training model type | Default |
| Training image type | Synthetic (only PBR images provided for BOP Challenge 2020 were used) |
| Description | |
| Evaluation scores | |

| User | yann_labbe |
|---|---|
| Publication | Labbé et al., CosyPose: Consistent multi-view multi-object 6D pose estimation, ECCV 2020 |
| Implementation | https://github.com/ylabbe/cosypose |
| Training image modalities | RGB |
| Test image modalities | RGB |
| Description | The method is the same as CosyPose-ECCV20-PBR-1VIEW, but the additional real and synthetic images are added to the training data when an official training split is available (TUD-L, T-LESS and YCB-Video). On the other datasets, the reported results are identical to those of CosyPose-ECCV20-PBR-1VIEW. All models (detector, coarse pose estimator, refiner) are initialized from the models trained on PBR images only. A sketch of this training setup is given after the table. |
| Computer specifications | CPU: 20-core Intel Xeon 6164 @ 3.2 GHz, GPU: Nvidia V100 |
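
The description above amounts to a data-mixing and warm-start configuration: always train on the provided PBR images, add the official real/synthetic splits where they exist, and initialize from the PBR-only checkpoints. The following is a hypothetical sketch of that setup in plain PyTorch, not the CosyPose implementation; `BOPPoseDataset`, `CoarsePoseModel`, the split names, paths and checkpoint file are illustrative placeholders.

```python
# Hypothetical sketch of the training-data setup described above.
# Dataset/model classes, split names and checkpoint paths are placeholders,
# not names from the CosyPose repository.

import torch
from torch.utils.data import ConcatDataset, DataLoader

from my_datasets import BOPPoseDataset   # placeholder dataset class
from my_models import CoarsePoseModel    # placeholder model class

# PBR images (provided for BOP Challenge 2020) are always used.
datasets = [BOPPoseDataset(root="bop/tless", split="train_pbr")]

# Real and extra synthetic images are added only when the dataset has an
# official training split (TUD-L, T-LESS, YCB-Video). ITODD has none, so for
# ITODD the training data reduces to the PBR images alone.
has_official_train_split = True  # e.g. T-LESS
if has_official_train_split:
    datasets.append(BOPPoseDataset(root="bop/tless", split="train_primesense"))
    datasets.append(BOPPoseDataset(root="bop/tless", split="train_render_reconst"))

train_set = ConcatDataset(datasets)
loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=8)

# Each model (detector, coarse pose estimator, refiner) is initialised from
# the corresponding checkpoint trained on PBR images only, then fine-tuned.
model = CoarsePoseModel()
ckpt = torch.load("checkpoints/coarse_pbr_only.pth", map_location="cpu")
model.load_state_dict(ckpt["state_dict"])

optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
# ... standard fine-tuning loop over `loader` goes here ...
```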