Submission: ZebraPoseSAT-EffnetB4 (DefaultDet+PBR_Only)/ITODD

Submission name
Submission time (UTC) Oct. 16, 2022, 8:02 p.m.
User zebrapose
Task Model-based 2D segmentation of seen objects
Dataset ITODD
Training model type Default
Training image type Synthetic (only PBR images provided for BOP Challenge 2020 were used)
Description
Evaluation scores
AP: 0.352
AP50: 0.578
AP75: 0.389
AP_large: 0.358
AP_medium: 0.300
AP_small: -1.000
AR1: 0.155
AR10: 0.418
AR100: 0.420
AR_large: 0.430
AR_medium: 0.299
AR_small: -1.000
average_time_per_image: 0.080

Method: ZebraPoseSAT-EffnetB4 (DefaultDet+PBR_Only)

User zebrapose
Publication ZebraPose: Coarse to Fine Surface Encoding for 6DoF Object Pose Estimation, CVPR 2022
Implementation https://github.com/suyz526/ZebraPose
Training image modalities RGB
Test image modalities RGB
Description

Based on the paper "ZebraPose: Coarse to Fine Surface Encoding for 6DoF Object Pose Estimation", CVPR 2022.

  • Training images: PBR images only

  • Setting: One network per object was trained

  • 2D Bounding box: default detections ("pbr" version) provided by the BOP organizers.

  • Modifications to the original ZebraPose paper:

    1. Added Symmetry-Aware Training (SAT). The network and loss functions are unchanged; instead, a new ground truth is generated for symmetric objects (details are in the GitHub repository; see the sketch after this list). Special thanks to Yongliang Lin for his contribution.

    2. Replaced the ResNet34 backbone with EfficientNet-B4 (only the backbone of the pose-estimation network is replaced).

  • About the submission to the 2022 segmentation challenge: for every 2D bounding box provided by the 2D detector, the ZebraPose network infers the visible object mask and the binary codes (as for object pose estimation). For the segmentation evaluation, we save 1) the confidence score from the 2D detector and 2) the predicted visible object mask into the results JSON file (see the sketch after this list).

  • The reported inference time includes the 2D detection time.
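
A minimal sketch of one way such a symmetry-aware ground truth could be precomputed is given below. It assumes the discrete symmetry transformations of the object model (e.g. from the BOP models_info.json) and canonicalises each ground-truth rotation to the symmetry-equivalent rotation closest to a fixed reference; the function names and the canonicalisation criterion are illustrative assumptions, not the exact procedure from the ZebraPose repository.

```python
# Hypothetical sketch: map each ground-truth rotation of a symmetric object
# to a canonical symmetry-equivalent rotation, so that all views that look
# identical share one surface-code ground truth. Not the authors' code.
import numpy as np

def rotation_angle(R):
    """Geodesic angle of a 3x3 rotation matrix (distance to the identity)."""
    cos = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos)

def canonical_gt_rotation(R_gt, sym_rotations, R_ref=np.eye(3)):
    """Return the symmetry-equivalent rotation of R_gt closest to R_ref.

    R_gt:          3x3 ground-truth rotation.
    sym_rotations: list of 3x3 discrete symmetry rotations of the object
                   model (including the identity), e.g. from models_info.json.
    """
    candidates = [R_gt @ S for S in sym_rotations]
    return min(candidates, key=lambda R: rotation_angle(R_ref.T @ R))

# The canonical rotation would then be used to render the dense binary
# surface codes that serve as the new ground truth for the symmetric object.
```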

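For reference, here is a minimal sketch of how the per-detection scores and visible masks could be written to a COCO-style results JSON. The key names follow the COCO-style result format used by the BOP toolkit as far as I understand it, and `predict_visible_mask` is a hypothetical stand-in for the ZebraPose inference step; this is not the authors' actual export code.

```python
# Sketch under assumptions: one COCO-style result entry per default detection,
# storing the detector confidence and the predicted visible mask as RLE.
import json

import numpy as np
from pycocotools import mask as mask_utils


def encode_mask(binary_mask):
    """RLE-encode an HxW binary visible mask so it can be stored in JSON."""
    rle = mask_utils.encode(np.asfortranarray(binary_mask.astype(np.uint8)))
    rle["counts"] = rle["counts"].decode("ascii")  # bytes -> str for JSON
    return rle


def save_segmentation_results(detections, predict_visible_mask, out_path):
    """Write one result entry per 2D detection.

    detections:           dicts with scene_id, image_id, category_id and the
                          confidence score of the default BOP detector.
    predict_visible_mask: callable returning an HxW binary mask per detection
                          (stand-in for the ZebraPose network).
    """
    results = []
    for det in detections:
        results.append({
            "scene_id": det["scene_id"],
            "image_id": det["image_id"],
            "category_id": det["category_id"],  # BOP object id
            "score": det["score"],              # confidence from the 2D detector
            "segmentation": encode_mask(predict_visible_mask(det)),
            "time": det.get("time", -1),
        })
    with open(out_path, "w") as f:
        json.dump(results, f)
```
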
List of contributors:

  • German Research Center for Artificial Intelligence (DFKI), Augmented Vision department:

Yongzhi Su, Praveen Nathan, Torben Fetzer, Jason Rambach, Didier Stricker

  • Technical University Munich (TUM), CAMPAR:

Mahdi Saleh, Yan Di, Nassir Navab, Benjamin Busam, Federico Tombari

  • Zhejiang University (ZJU):

Yongliang Lin, Yu Zhang

Computer specifications Intel(R) Xeon(R) E-2146G CPU @ 3.50GHz, Nvidia RTX2080Ti