The blindfolded classification dataset

This is a description of our blindfolded classification dataset, which was used in our paper Learning the signatures of the human grasp using a scalable tactile glove to test how well the object identification neural network, trained in visually aware conditions, generalizes to conditions without visual awareness of the grasped object. See "Methods » Dataset acquisition methods » Visually aware and blindfolded conditions" in the paper.

The dataset is provided for non-commercial use only. Contact us with inquiries about commercial use. Read the license for more details.

Dataset content

The zip file of the dataset contains the following structure:

|
|- metadata.mat
|- [batch]
|     |- [recording]
|          |- [000000...N].jpg
|          |- viz
|          |    |- [000000...N].jpg
|          |- pressuredata.mat

Where:

- metadata.mat: the metadata for the whole dataset (described below)
- [batch]: a directory for each recording batch
- [recording]: a directory for each recording within the batch
- [000000...N].jpg: one photo per recorded frame
- viz: visualization images for the same frames
- pressuredata.mat: the raw pressure data recorded for that recording
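
As a quick orientation, the sketch below lists the contents of one recording directory; the batch and recording names used here are placeholders, not actual directory names from the zip file:

% Placeholder names -- substitute real batch/recording directories from the zip.
recDir = fullfile('some_batch', 'some_recording');
% Per-frame photos (000000.jpg ... N.jpg) and the images in the viz subdirectory.
frames = dir(fullfile(recDir, '*.jpg'));
vizFrames = dir(fullfile(recDir, 'viz', '*.jpg'));
fprintf('%d frame images, %d viz images.\n', numel(frames), numel(vizFrames));
% Variables stored in the raw pressure file for this recording.
whos('-file', fullfile(recDir, 'pressuredata.mat'))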

Metadata structure

The metadata.mat file contains the pressure values for all recordings, as well as all the flags generated in the preprocessing step to filter valid frames; see the Methods section of our paper for more details. The file is a Matlab MAT-file. Note that it uses C-style (zero-based) indexing, unlike Matlab, so in Matlab you have to add 1 when indexing, e.g. batches{batchId(55) + 1}.

The file contains, among others:

- batches, recordings, objects: cell arrays with the names of all batches, recordings, and grasped objects
- batchId, recordingId, objectId: zero-based indices into the arrays above, one entry per recorded frame
- frame: the frame number of each entry within its recording
- the pressure values and valid-frame flags described above
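
As a quick check, the number of frames, recordings, and objects indexed by the metadata can be printed like this (the variable names are the ones used in the image example below):

load('metadata.mat');
% One entry per recorded frame; recordings and objects hold the name lists.
fprintf('%d frames across %d recordings of %d objects.\n', ...
    numel(frame), numel(recordings), numel(objects));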


These index arrays also make it possible to associate each video image with its respective pressure frame. To read the image associated with the frame stored at index 123 in Matlab, do:

% Load the metadata; all indices are zero-based, hence the +1 below.
load('metadata.mat');
% Build the path to the photo of the frame stored at index 123 and read it.
im = imread(fullfile(batches{batchId(123) + 1}, recordings{recordingId(123) + 1}, sprintf('%06d.jpg', frame(123))));
% Print the name of the grasped object and display the photo.
fprintf('Showing object "%s".\n', objects{objectId(123) + 1});
imshow(im);
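
The raw per-recording pressure file can be located the same way. Since this page does not document the variable names inside pressuredata.mat, the sketch below only builds the path for the recording that index 123 belongs to and inspects the file's contents, rather than assuming a particular layout:

load('metadata.mat');
% Directory of the recording that index 123 belongs to (zero-based IDs, hence +1).
recDir = fullfile(batches{batchId(123) + 1}, recordings{recordingId(123) + 1});
% List the variables stored in the raw pressure file, then load them into a struct.
whos('-file', fullfile(recDir, 'pressuredata.mat'))
pdata = load(fullfile(recDir, 'pressuredata.mat'));
disp(fieldnames(pdata));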




© 2019 The Authors. The author's version of the work is posted here for your personal use. Not for redistribution.