HyKo-Dataset (2017)
===================

_by  Active Vision Group (University of Koblenz-Landau)_
Directory contents:

* HyKo1/ (last modified 2017-10-22)
* HyKo2/ (last modified 2017-10-22)
* HyKo3/ (last modified 2017-10-22)
* iosb/ (last modified 2018-11-06)
* misc/ (last modified 2017-11-06)
* qmini_ros/ (last modified 2017-10-22)
* example.py (1.9K, last modified 2017-08-03)
HyKo Dataset Read-Me
---------------------------



We present a novel dataset captured with compact, low-cost, snapshot mosaic (SSM) imaging cameras, which are able to capture a whole spectral cube in one shot. To the best of our knowledge, it is the first dataset in which hyperspectral data was recorded from a moving vehicle, enabling hyperspectral scene analysis for road scene understanding. In total, we recorded several hours of traffic scenarios using a variety of sensor modalities such as hyperspectral cameras and 3D laser scanners. We captured and hand-labeled diverse scenarios, ranging from real-world traffic situations in city scenes to suburban areas. Our data is synchronized and annotated with semantic and material labels, which allows training classifiers for scene understanding and autonomous driving. The data covers wavelengths from 400 to 1000 nm, spanning the visible and near-infrared spectral ranges. Here we describe our recording platforms, the data format, and the utilities needed to work with the data.
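
The snippet below is a minimal sketch of how such a spectral cube might be loaded and inspected. It assumes the recordings are distributed as MATLAB .mat files holding a three-dimensional cube (height x width x bands); the file name and the key `"data"` used here are illustrative assumptions, not a documented interface. See example.py in this directory for the authoritative loader.

```python
# Minimal sketch: load and inspect one hyperspectral cube.
# Assumes a .mat file with a 3-D cube under the (hypothetical) key "data";
# adapt the file name and key to the actual files (see example.py).
import numpy as np
import scipy.io
import matplotlib.pyplot as plt

mat = scipy.io.loadmat("hyko_scene_0001.mat")        # hypothetical file name
cube = np.asarray(mat["data"], dtype=np.float32)     # shape: (height, width, bands)

print("cube shape:", cube.shape)

# The sensors cover roughly 400-1000 nm; pick a band near the middle
# of that range and display it as a grayscale image.
band = cube.shape[2] // 2
plt.imshow(cube[:, :, band], cmap="gray")
plt.title(f"band {band} of {cube.shape[2]}")
plt.show()
```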

* HyKo 1: Contains drivability labels.
* HyKo 2: Contains drivability, semantic and spectral reflectance labels (see the sketch after this list for turning per-pixel labels into training data).
* HyKo 3: Coming soon!
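
As a rough sketch of how the per-pixel annotations could be used to train a classifier, the snippet below flattens a cube into one spectrum per row and pairs it with the corresponding label vector. The keys `"data"` and `"label"`, and the convention that class id 0 means "unlabeled", are assumptions here; check the actual files and example.py for the real layout.

```python
# Minimal sketch: build a (pixels x bands) feature matrix and a label vector
# from one annotated scene, for training a per-pixel classifier.
import numpy as np
import scipy.io

mat = scipy.io.loadmat("hyko_scene_0001.mat")        # hypothetical file name
cube = np.asarray(mat["data"], dtype=np.float32)     # (height, width, bands), assumed key
labels = np.asarray(mat["label"]).squeeze()          # (height, width) class ids, assumed key

X = cube.reshape(-1, cube.shape[2])                  # one spectrum per row
y = labels.reshape(-1)

mask = y > 0                                         # drop unlabeled pixels (assumed id 0)
X, y = X[mask], y[mask]
print("training spectra:", X.shape, "classes:", np.unique(y))
```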

We are constantly working on expanding and improving our datasets.

Find more information at: _https://wp.uni-koblenz.de/hyko/_

Last Update: 22.10.2017