Submission: ZebraPoseSAT-EffnetB4 (DefaultDet+PBR_Only)/T-LESS

Submission name ZebraPoseSAT-EffnetB4 (DefaultDet+PBR_Only)
Submission time (UTC) Oct. 16, 2022, 8 p.m.
User zebrapose
Task Model-based 2D segmentation of seen objects
Dataset T-LESS
Training model type CAD
Training image type Synthetic (only PBR images provided for BOP Challenge 2020 were used)
Description
Evaluation scores
AP:0.655
AP50:0.860
AP75:0.796
AP_large:0.706
AP_medium:0.592
AP_small:0.062
AR1:0.625
AR10:0.714
AR100:0.714
AR_large:0.804
AR_medium:0.599
AR_small:0.063
average_time_per_image:0.080 s

Method: ZebraPoseSAT-EffnetB4 (DefaultDet+PBR_Only)

User zebrapose
Publication ZebraPose: Coarse to Fine Surface Encoding for 6DoF Object Pose Estimation, CVPR 2022
Implementation https://github.com/suyz526/ZebraPose
Training image modalities RGB
Test image modalities RGB
Description

Based on the paper "ZebraPose: Coarse to Fine Surface Encoding for 6DoF Object Pose Estimation", CVPR 2022.

  • Training images: PBR images only

  • Setting: One network per object was trained

  • 2D Bounding box: default detections ("pbr" version) provided by the BOP organizers.

  • Modifications to the original ZebraPose paper:

    1. Added Symmetry-Aware Training (SAT). The network and loss functions are unchanged; only the ground truth for symmetric objects is regenerated. Details can be found in the GitHub repository, and a hypothetical sketch of the idea follows this list. Special thanks to Yongliang Lin for his contribution.

    2. The ResNet34 backbone of ZebraPose is replaced with EfficientNet-B4 (only the backbone of the pose-estimation network is replaced; see the second sketch after this list).

  • Submission to the 2022 segmentation challenge: for every 2D bounding box provided by the 2D detector, the ZebraPose network infers the visible object mask and the binary codes (as for object pose estimation). We then save 1) the confidence score from the 2D detector and 2) the predicted visible object mask into the JSON file used for the segmentation evaluation (a sketch of a possible output format follows this list).

  • The reported inference time includes the 2D detection time.
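
A minimal, hypothetical sketch of the symmetry-aware ground-truth idea mentioned above: for a symmetric object, the ground-truth rotation is mapped to a canonical symmetry-equivalent representative before the surface-code labels are generated, so that all equivalent poses receive the same encoding. The function name and the canonicalization criterion are illustrative assumptions; the actual implementation is in the ZebraPose repository.

    # Hypothetical sketch (not the authors' code): canonicalize the ground-truth
    # rotation of a symmetric object before generating the surface-code labels.
    import numpy as np

    def canonicalize_rotation(R_gt, sym_rotations):
        """R_gt: (3, 3) ground-truth rotation matrix.
        sym_rotations: discrete symmetry rotations of the object, e.g. derived
        from models_info.json of a BOP dataset (an assumption)."""
        best_R, best_trace = R_gt, -np.inf
        for S in sym_rotations:
            R_cand = R_gt @ S          # a symmetry-equivalent pose
            trace = np.trace(R_cand)   # larger trace = smaller angle to identity
            if trace > best_trace:
                best_R, best_trace = R_cand, trace
        return best_R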
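
A minimal sketch of the backbone swap, assuming the timm library provides the EfficientNet-B4 encoder (the actual ZebraPose code may use a different EfficientNet implementation); the multi-scale feature maps would feed the same decoder and prediction heads as before.

    # Sketch, assuming timm; not the authors' exact training code.
    import timm
    import torch

    # features_only=True returns feature maps at several strides, which a
    # U-Net style decoder for the mask / binary-code heads can consume.
    backbone = timm.create_model("efficientnet_b4", pretrained=True,
                                 features_only=True)

    x = torch.randn(1, 3, 256, 256)   # cropped RoI around a 2D detection
    for f in backbone(x):
        print(f.shape)                # one entry per feature-map stride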
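
A hedged sketch of how one detection could be written out for the segmentation evaluation, assuming a COCO-style result format with an RLE-encoded mask (the field names follow the COCO convention and are not taken verbatim from the submission code).

    # Assumption: COCO-style result entries with RLE masks; not the exact
    # format used by the authors.
    import json
    import numpy as np
    from pycocotools import mask as mask_utils

    def to_result(image_id, category_id, visible_mask, det_score):
        """visible_mask: (H, W) boolean array predicted by the network."""
        rle = mask_utils.encode(np.asfortranarray(visible_mask.astype(np.uint8)))
        rle["counts"] = rle["counts"].decode("ascii")  # make JSON-serializable
        return {
            "image_id": image_id,
            "category_id": category_id,
            "segmentation": rle,
            "score": float(det_score),  # confidence taken from the 2D detector
        }

    results = [to_result(42, 5, np.zeros((540, 720), dtype=bool), 0.93)]
    with open("segmentation_results.json", "w") as f:
        json.dump(results, f)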

List of contributors:

  • German Research Center for Artificial Intelligence (DFKI), Augmented Vision department:

Yongzhi Su, Praveen Nathan, Torben Fetzer, Jason Rambach, Didier Stricker

  • Technical University Munich (TUM), CAMPAR:

Mahdi Saleh, Yan Di, Nassir Navab, Benjamin Busam, Federico Tombari

  • Zhejiang University (ZJU):

Yongliang Lin, Yu Zhang

Computer specifications Intel(R) Xeon(R) E-2146G CPU @ 3.50GHz, NVIDIA RTX 2080 Ti