The datasets include 3D object models and training and test RGB-D images annotated with ground-truth 6D object poses and intrinsic camera parameters.

The 3D object models were created manually or using KinectFusion-like systems for 3D surface reconstruction. The training images show individual objects from different viewpoints and were either captured by an RGB-D/Gray-D sensor or obtained by rendering the 3D object models. The test images were captured in scenes of graded complexity, often with clutter and occlusion.
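
Since each annotation consists of an intrinsic camera matrix and a 6D pose (a model-to-camera rotation and translation), a model point can be mapped to a pixel by the standard pinhole projection. A minimal sketch with made-up camera parameters (the function name and numbers are illustrative, not part of the datasets):

```python
import numpy as np

def project(pts_m, K, R, t):
    """Project 3D model points (N, 3) into the image.

    K: 3x3 intrinsic matrix, R: 3x3 rotation, t: (3,) translation of the
    model-to-camera transform, as in the ground-truth pose annotations.
    Returns (N, 2) pixel coordinates.
    """
    pts_c = pts_m @ R.T + t               # model frame -> camera frame
    pts_im = pts_c @ K.T                  # apply intrinsics
    return pts_im[:, :2] / pts_im[:, 2:]  # perspective division

# Toy parameters: fx = fy = 500, principal point (320, 240), identity
# rotation, object 1 m in front of the camera (translations are in mm).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1000.0])

uv = project(np.array([[0.0, 0.0, 0.0]]), K, R, t)
print(uv)  # the model origin projects to the principal point (320, 240)
```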

The datasets are provided in the BOP format. The BOP toolkit expects all datasets to be stored in the same folder, each dataset in a subfolder named with the base name of the dataset (e.g. "lm", "lmo", "tless").

An example showing how to download and unpack the LM dataset using bash (names of the archives with the other datasets can be seen in the download links below):

export SRC=https://bop.felk.cvut.cz/media/data/bop_datasets
wget $SRC/lm_base.zip       # Base archive with dataset info, camera parameters, etc.
wget $SRC/lm_models.zip     # 3D object models.
wget $SRC/lm_test_all.zip   # All test images ("_bop19" for a subset used in the BOP Challenge 2019/2020).
wget $SRC/lm_train_pbr.zip  # PBR training images (rendered with BlenderProc4BOP).

unzip lm_base.zip             # Contains folder "lm".
unzip lm_models.zip -d lm     # Unpacks to "lm".
unzip lm_test_all.zip -d lm   # Unpacks to "lm".
unzip lm_train_pbr.zip -d lm  # Unpacks to "lm".
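
Once unpacked, the per-scene annotations are stored as JSON files in the BOP format, with scene_camera.json holding the intrinsics per image and scene_gt.json the ground-truth poses. A minimal parsing sketch, assuming the field names documented for the BOP format (cam_K, cam_R_m2c, cam_t_m2c, obj_id); the inline JSON is a made-up stand-in for the real files:

```python
import json
import numpy as np

def parse_gt(scene_gt, scene_camera, im_id):
    """Return (K, list of (obj_id, R, t)) for one image of a BOP scene.

    scene_gt / scene_camera are the parsed contents of scene_gt.json and
    scene_camera.json; image ids are stored as string keys.
    """
    cam = scene_camera[str(im_id)]
    K = np.array(cam["cam_K"], dtype=np.float64).reshape(3, 3)
    poses = []
    for gt in scene_gt[str(im_id)]:
        R = np.array(gt["cam_R_m2c"], dtype=np.float64).reshape(3, 3)  # model-to-camera rotation
        t = np.array(gt["cam_t_m2c"], dtype=np.float64)                # translation in millimeters
        poses.append((gt["obj_id"], R, t))
    return K, poses

# Stand-in for the JSON files of one scene (structure as in the BOP
# format; the numbers are made up).
scene_camera = json.loads(
    '{"0": {"cam_K": [500, 0, 320, 0, 500, 240, 0, 0, 1], "depth_scale": 0.1}}')
scene_gt = json.loads(
    '{"0": [{"obj_id": 1, "cam_R_m2c": [1, 0, 0, 0, 1, 0, 0, 0, 1], "cam_t_m2c": [0, 0, 1000]}]}')

K, poses = parse_gt(scene_gt, scene_camera, 0)
obj_id, R, t = poses[0]
print(obj_id, t)
```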

LM (Linemod)

Hinterstoisser et al.: Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes, ACCV 2012.

15 texture-less household objects with discriminative color, shape and size. Each object is associated with a test image set showing one annotated object instance with significant clutter but only mild occlusion.

LM-O (Linemod-Occluded)

Brachmann et al.: Learning 6d object pose estimation using 3d object coordinates, ECCV 2014.

Provides additional ground-truth annotations for all modeled objects in one of the test sets from LM. This introduces challenging test cases with various levels of occlusion. Note that the PBR training images (rendered with BlenderProc4BOP) are the same as for LM.

T-LESS

Hodan et al.: T-LESS: An RGB-D Dataset for 6D Pose Estimation of Texture-less Objects, WACV 2017, project website.

30 industry-relevant objects with no significant texture or discriminative color. The objects exhibit symmetries and mutual similarities in shape and/or size, and a few objects are a composition of other objects. Test images originate from 20 scenes with varying complexity. Only images from Primesense Carmine 1.09 are included in the archives below. Images from Microsoft Kinect 2 and Canon IXUS 950 IS are available at the project website. However, only the Primesense images can be used in the BOP Challenge 2019/2020.

ITODD (MVTec ITODD)

Drost et al.: Introducing MVTec ITODD - A Dataset for 3D Object Recognition in Industry, ICCVW 2017, project website.

28 objects captured in realistic industrial setups with a high-quality Gray-D sensor. The ground-truth 6D poses are publicly available only for the validation images, not for the test images.

HB (HomebrewedDB)

Kaskman et al.: HomebrewedDB: RGB-D Dataset for 6D Pose Estimation of 3D Objects, ICCVW 2019, project website.

33 objects (17 toy, 8 household and 8 industry-relevant objects) captured in 13 scenes with varying complexity. The ground-truth 6D poses are publicly available only for the validation images, not for the test images. The dataset includes images from Primesense Carmine 1.09 and Microsoft Kinect 2. Note that only the Primesense images can be used in the BOP Challenge 2019/2020.

The dataset is provided under the CC0 1.0 Universal license.

YCB-V (YCB-Video)

Xiang et al.: PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes, RSS 2018, project website.

21 objects from the YCB dataset captured in 92 videos with 133,827 frames. For the BOP Challenge 2019, 75 images with higher-quality ground-truth poses were manually selected from each of the 12 test videos. The selected images are a subset of images listed in "YCB_Video_Dataset/image_sets/keyframe.txt" in the original dataset. The 80K synthetic training images included in the original dataset are also provided.

RU-APC (Rutgers APC)

Rennie et al.: A dataset for improved RGBD-based object detection and pose estimation for warehouse pick-and-place, Robotics and Automation Letters 2016.

14 textured products from the Amazon Picking Challenge 2015 [6], each associated with test images of a cluttered warehouse shelf.

IC-BIN (Doumanoglou et al.)

Doumanoglou et al.: Recovering 6D Object Pose and Predicting Next-Best-View in the Crowd, CVPR 2016.

Test images of two objects from IC-MI, which appear in multiple locations with heavy occlusion in a bin-picking scenario.

IC-MI (Tejani et al.)

Tejani et al.: Latent-class hough forests for 3D object detection and pose estimation, ECCV 2014.

Two texture-less and four textured household objects. The test images show multiple object instances with clutter and slight occlusion.

TUD-L (TUD Light)

Hodan, Michel et al.: BOP: Benchmark for 6D Object Pose Estimation, ECCV 2018.

Training and test image sequences show three moving objects under eight lighting conditions.

TYO-L (Toyota Light)

Hodan, Michel et al.: BOP: Benchmark for 6D Object Pose Estimation, ECCV 2018.

21 objects, each captured in multiple poses on a table-top setup, with four different table cloths and five different lighting conditions.

The thumbnails of the datasets were obtained by rendering the colored 3D object models in the ground-truth 6D poses over darkened test images.