Perception Dataset

Dataset Kit

A robust collection of raw sensor data

Our automated vehicles are equipped with an in-house sensor suite that collects raw sensor data on other cars, pedestrians, traffic lights, and more. This dataset features the raw lidar and camera inputs collected by our automated fleet within a bounded geographic area. It includes:

1M
3D ANNOTATIONS

30K
LIDAR POINT CLOUDS

350+
SCENES, 60-90 MINUTES EACH

EXPLORE


Perception dataset sample

Annotations provided by Scale

Visualization of data captured from the sensors

Learn More

A deeper look at our perception systems

Lidar data visualization

Get a sense for how our lidars perceive the world around them with a top-down view of the data they collect.
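For a rough idea of what such a top-down render involves, here is a minimal sketch in Python. It assumes a lidar sweep already loaded as an (N, 3) array of x, y, z returns in the vehicle frame; the points.npy file name is a placeholder, not part of the dataset.

    # Minimal top-down (bird's-eye-view) render of a lidar sweep.
    # Assumes `points` holds x, y, z returns in metres in the vehicle frame.
    import numpy as np
    import matplotlib.pyplot as plt

    points = np.load("points.npy")  # placeholder file; shape (N, 3)

    fig, ax = plt.subplots(figsize=(8, 8))
    # Colour each return by height so the ground plane and obstacles separate.
    ax.scatter(points[:, 0], points[:, 1], c=points[:, 2], s=0.5, cmap="viridis")
    ax.set_aspect("equal")
    ax.set_xlim(-80, 80)
    ax.set_ylim(-80, 80)
    ax.set_xlabel("x (m)")
    ax.set_ylabel("y (m)")
    ax.set_title("Top-down lidar view")
    plt.show()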


GET STARTED


Example perception solution

Our example solution is a single-shot, top-down segmentation network with a U-Net architecture, trained on the lidar portion of the dataset. Its rasterization step combines the HD semantic map with the projected lidar point cloud to depict the state of the world around the vehicle. You can use this example solution as a starting point for your own experimentation.

Visualization of data captured from the vehicle's sensors
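To make the architecture concrete, below is a hedged sketch of a small single-shot, top-down U-Net in PyTorch. The channel widths, depth, three-channel BEV input, and class count are illustrative assumptions, not the parameters of the trained reference model.

    # A small single-shot, top-down U-Net for per-pixel BEV segmentation.
    # All sizes here are illustrative assumptions, not the reference model.
    import torch
    import torch.nn as nn

    def double_conv(cin, cout):
        # Two 3x3 convolutions, the basic U-Net building block.
        return nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True),
        )

    class TinyUNet(nn.Module):
        def __init__(self, in_ch=3, n_classes=10):
            super().__init__()
            self.enc1 = double_conv(in_ch, 32)
            self.enc2 = double_conv(32, 64)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = double_conv(64, 128)
            self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
            self.dec2 = double_conv(128, 64)   # 128 = upsampled 64 + skip 64
            self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
            self.dec1 = double_conv(64, 32)    # 64 = upsampled 32 + skip 32
            self.head = nn.Conv2d(32, n_classes, 1)

        def forward(self, x):
            e1 = self.enc1(x)                   # full resolution
            e2 = self.enc2(self.pool(e1))       # 1/2 resolution
            b = self.bottleneck(self.pool(e2))  # 1/4 resolution
            d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
            d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
            return self.head(d1)  # one forward pass yields all class logits

    model = TinyUNet()
    bev = torch.randn(1, 3, 256, 256)  # hypothetical rasterized map + lidar input
    print(model(bev).shape)            # torch.Size([1, 10, 256, 256])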

DATA FORMAT

We use the familiar nuScenes format for our dataset to ensure compatibility with previous work. We’ve also customized the nuScenes devkit and included instructions on how to use it.
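As an illustration of the format, the following sketch loads the dataset with Lyft's fork of the nuScenes devkit (published as the lyft-dataset-sdk Python package). The data_path and json_path values are placeholders for wherever you extract the download.

    # Load the dataset through Lyft's fork of the nuScenes devkit.
    from lyft_dataset_sdk.lyftdataset import LyftDataset

    level5data = LyftDataset(
        data_path="/path/to/dataset",       # placeholder: images, lidar, maps
        json_path="/path/to/dataset/data",  # placeholder: nuScenes-style JSON tables
        verbose=True,
    )

    # The format is relational: a scene points at its first sample, and each
    # sample points at per-sensor data and annotation records.
    scene = level5data.scene[0]
    first_sample = level5data.get("sample", scene["first_sample_token"])
    print(scene["name"], len(first_sample["anns"]), "annotations in the first sample")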

Download the perception dataset kit

DOWNLOAD

1. Download the Dataset

The dataset is made up of three subsets.

2. Download Lyft’s Version of the nuScenes SDK

Our custom SDK reads the data; see the setup instructions in our README.md. A short traversal sketch follows these steps.

3. Download the Example Perception Solution for Reference

Our example solution will give you a starting point for experimentation.
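Continuing the loading sketch from the DATA FORMAT section, this hedged example walks one scene sample-by-sample and counts the 3D boxes attached to each lidar sweep. The LIDAR_TOP channel name follows the nuScenes convention; adjust it to whatever channels your download actually contains.

    # Walk one scene chronologically; `level5data` and `scene` come from the
    # loading sketch in the DATA FORMAT section.
    sample_token = scene["first_sample_token"]
    while sample_token:
        sample = level5data.get("sample", sample_token)
        lidar_token = sample["data"]["LIDAR_TOP"]   # nuScenes-style channel name
        boxes = level5data.get_boxes(lidar_token)   # 3D boxes for that sweep
        print(sample["timestamp"], len(boxes), "boxes")
        sample_token = sample["next"]               # empty string ends the scene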

CITATION INSTRUCTION

If you use the dataset for scientific work, please cite the following:

@misc{WovenPlanetHoldings2019,
  title = {Woven Planet Perception Dataset 2020},
  author = {Kesten, R. and Usman, M. and Houston, J. and Pandya, T. and Nadhamuni, K. and Ferreira, A. and Yuan, M. and Low, B. and Jain, A. and Ondruska, P. and Omari, S. and Shah, S. and Kulkarni, A. and Kazakova, A. and Tao, C. and Platinsky, L. and Jiang, W. and Shet, V.},
  year = {2019},
  howpublished = {\url{https://woven.toyota/en/perception-dataset}}
}

LICENSING INFORMATION

The downloadable “Woven by Toyota Perception Dataset” and included materials are ©2020 Woven Planet Holdings, Inc., and licensed under version 4.0 of the Creative Commons Attribution-NonCommercial-ShareAlike license (CC-BY-NC-SA-4.0).

The HD map included with the dataset was developed using data from the OpenStreetMap database which is ©OpenStreetMap contributors and available under the ODbL-1.0 license.

The nuScenes devkit was previously published by nuTonomy under the Creative Commons Attribution-NonCommercial-ShareAlike license (CC-BY-NC-SA-4.0), but is currently published under the Apache license version 2.0. Lyft’s forked nuScenes devkit has been modified for use with the Woven by Toyota AV dataset. Lyft’s modifications are ©2020 Lyft, Inc., and licensed under version 4.0 of the Creative Commons Attribution-NonCommercial-ShareAlike license (CC-BY-NC-SA-4.0).