Results of the 2019 and 2020 editions of the challenge are published in:
T. Hodaň, M. Sundermeyer, B. Drost, Y. Labbé, E. Brachmann, F. Michel, C. Rother, J. Matas,
BOP Challenge 2020 on 6D Object Localization, ECCVW 2020
[PDF, SLIDES, BIB]
When referring to the BlenderProc4BOP renderer, please cite:
M. Denninger, M. Sundermeyer, D. Winkelbauer, D. Olefir, T. Hodaň, Y. Zidan, M. Elbadrawy, M. Knauer, H. Katam, A. Lodhi,
BlenderProc: Reducing the Reality Gap with Photorealistic Rendering, RSS Workshops 2020
[PDF, CODE, VIDEO, BIB]
As in the 2019 edition, the 2020 edition of the BOP Challenge focuses on pose estimation of specific rigid objects. The 2019 and 2020 editions follow the same task definition, evaluation methodology, list of core datasets, and instructions for participation, all of which are described on the page about the 2019 edition. This page describes only the updates introduced in the 2020 edition. The 2019, 2020, and 2022 editions share the same leaderboard, and the submission form for this leaderboard stays open to allow comparison with upcoming methods.
Photorealistic training images: In 2020, the challenge focuses on the synthesis of effective RGB training images. While learning from synthetic data has become common for depth-based pose estimation methods, it remains difficult for RGB-based methods, where the domain gap between synthetic training and real test images is more severe. Specifically for the challenge, we have therefore prepared BlenderProc4BOP, an open-source and lightweight physically-based renderer (PBR), and used it to render training images for each of the seven core datasets. With this addition, we hope to lower the entry barrier of the challenge for participants working on learning-based RGB and RGB-D solutions. We are excited to see whether photorealistic training images will help to close the performance gap between depth-based and RGB-based methods observed in the 2019 edition of the challenge. Participants are encouraged to build on top of the renderer and release their extensions (a minimal rendering sketch follows below).
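For participants who want to build on the renderer, the snippet below is a minimal sketch of a BlenderProc rendering script using the current Python API. It is not the full BOP PBR pipeline (no physics-based object placement or CC0 material randomization), and the model path and camera pose are placeholders chosen for illustration:

```python
# Minimal BlenderProc rendering sketch. Run with: blenderproc run render_sketch.py
# NOTE: this only illustrates the basic rendering loop; the actual BOP PBR
# pipeline additionally randomizes object poses with physics and materials.
import blenderproc as bproc
import numpy as np

bproc.init()

# Load an object model (placeholder path).
objs = bproc.loader.load_obj("path/to/obj_000001.obj")

# Add a point light.
light = bproc.types.Light()
light.set_type("POINT")
light.set_location([1.0, -1.0, 1.0])
light.set_energy(500)

# Camera intrinsics and a single camera pose (camera-to-world matrix).
bproc.camera.set_resolution(640, 480)
cam2world = bproc.math.build_transformation_mat([0.0, -0.8, 0.5], [np.pi / 3, 0.0, 0.0])
bproc.camera.add_camera_pose(cam2world)

# Render RGB and depth, and store the result in an HDF5 container.
bproc.renderer.enable_depth_output(activate_antialiasing=False)
data = bproc.renderer.render()
bproc.writer.write_hdf5("output/", data)
```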
Short papers in the ECCV 2020 workshop proceedings: Participants of the 2020 edition have the opportunity to document their methods by submitting a short paper to the 6th Workshop on Recovering 6D Object Pose (ECCV 2020). The paper will be reviewed by the organizing committee and accepted if it presents a method with competitive results or distinguishing features. The paper must have exactly 4 pages including references and, if accepted, will be published in the ECCV workshop proceedings. Note that besides the short papers, the workshop invites submissions of full papers (14 pages excluding references) on any topic related to 6D object pose estimation.
In the 2020 edition, we additionally provide 50K photorealistic training images for each of the seven core datasets: LM/LM-O, T-LESS, TUD-L, IC-BIN, ITODD, HB, and YCB-V. The images were rendered with BlenderProc4BOP, an open-source and lightweight physically-based renderer (PBR) prepared for the BOP Challenge 2020. The renderer implements a synthesis approach similar to ObjectSynth: the objects are thrown inside a cube using a physics simulation, and a rich variety of backgrounds is ensured by assigning a random PBR material from the CC0 Textures library to the walls of the cube. Example images are below.
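The PBR training images are distributed in the standard BOP dataset format. The sketch below is a minimal loading example, assuming the usual layout (train_pbr/<scene_id>/rgb/<im_id>.jpg plus scene_gt.json and scene_camera.json per scene) and a hypothetical dataset path; it reads the RGB images, camera intrinsics, and ground-truth model-to-camera poses of one scene:

```python
import json
import os

import numpy as np
from PIL import Image


def load_pbr_scene(dataset_root, scene_id=0):
    """Load RGB images and ground-truth 6D poses of one PBR training scene.

    Assumes the standard BOP layout, e.g.:
      <dataset_root>/train_pbr/000000/rgb/000000.jpg
      <dataset_root>/train_pbr/000000/scene_gt.json
      <dataset_root>/train_pbr/000000/scene_camera.json
    """
    scene_dir = os.path.join(dataset_root, "train_pbr", f"{scene_id:06d}")

    with open(os.path.join(scene_dir, "scene_gt.json")) as f:
        scene_gt = json.load(f)
    with open(os.path.join(scene_dir, "scene_camera.json")) as f:
        scene_camera = json.load(f)

    samples = []
    for im_id, gt_list in scene_gt.items():
        rgb_path = os.path.join(scene_dir, "rgb", f"{int(im_id):06d}.jpg")
        rgb = np.array(Image.open(rgb_path))

        # Intrinsic matrix of the camera used to render this image.
        K = np.array(scene_camera[im_id]["cam_K"], dtype=np.float64).reshape(3, 3)

        # One ground-truth annotation per object instance: rotation (3x3) and
        # translation (3x1, in mm) from the model frame to the camera frame.
        poses = [
            {
                "obj_id": gt["obj_id"],
                "R": np.array(gt["cam_R_m2c"], dtype=np.float64).reshape(3, 3),
                "t": np.array(gt["cam_t_m2c"], dtype=np.float64).reshape(3, 1),
            }
            for gt in gt_list
        ]
        samples.append({"rgb": rgb, "K": K, "poses": poses})
    return samples


# Example usage (hypothetical path): load the first PBR scene of LM.
# samples = load_pbr_scene("/data/bop/lm", scene_id=0)
```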
The conditions which a method needs to fulfill in order to qualify for the awards are the same as in BOP Challenge 2019.
The instructions for participation and the submission form are the same as in BOP Challenge 2019.
Tomáš Hodaň, Czech Technical University in Prague
Martin Sundermeyer, DLR German Aerospace Center
Eric Brachmann, Heidelberg University
Bertram Drost, MVTec
Frank Michel, Technical University Dresden
Jiří Matas, Czech Technical University in Prague
Carsten Rother, Heidelberg University