Humans can feel, weigh and grasp diverse objects, and simultaneously infer their material properties while applying the right amount of force – a challenging set of tasks for a modern robot. The mechanoreceptor networks that provide sensory feedback and enable the dexterity of the human grasp remain challenging to imitate in man-made robots. While computer-vision-based robot grasping strategies have progressed significantly with the abundance of visual data and emerging machine learning tools, there are as yet no equivalent sensing platforms and large-scale datasets for exploiting the tactile information that humans rely on when grasping objects. Importantly, the inability to record and analyse tactile signals currently limits our understanding of the role of tactile information in the human grasp itself – for example, how tactile maps are used to identify objects and infer their properties is unknown. Here we demonstrate – using a scalable tactile glove (STAG) and deep convolutional neural networks (CNNs) – that sensors uniformly distributed over the hand can be used to identify individual objects, estimate their weights and explore the typical tactile patterns that emerge while grasping objects. The sensor array (548 sensors) is assembled on a knitted glove and consists of a piezoresistive film connected by a network of conductive thread electrodes that are passively probed. Using the low-cost STAG sensor array (~$10), we record a large-scale tactile dataset of 135,000 frames, each covering the full hand, while interacting with 26 different objects. This collective set of interactions with different objects reveals the key correspondences between different regions of the hand while manipulating objects. Insights from the tactile signatures of the human grasp – through the lens of a man-made analogue of the natural mechanoreceptor network – can aid the future design of new prosthetics, robot grasping tools and human-robot interactions.
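To make the learning setup described above concrete, here is a minimal PyTorch sketch of a CNN that maps a single tactile frame to one of the 26 object classes. It assumes the frames are stored as 32 x 32 pressure maps (the 548 sensors laid out on a grid) and uses an illustrative architecture; it is not the network used in the paper.

    # Minimal sketch (not the paper's network): a small CNN that classifies a
    # single 32x32 tactile frame into one of the 26 objects. Layer sizes are
    # illustrative assumptions.
    import torch
    import torch.nn as nn

    class TactileCNN(nn.Module):
        def __init__(self, num_classes=26):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                               # 32x32 -> 16x16
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),                               # 16x16 -> 8x8
            )
            self.classifier = nn.Linear(64 * 8 * 8, num_classes)

        def forward(self, x):                                  # x: (batch, 1, 32, 32)
            x = self.features(x)
            return self.classifier(x.flatten(1))

    model = TactileCNN()
    frames = torch.rand(8, 1, 32, 32)    # stand-in for normalized pressure frames
    logits = model(frames)               # (8, 26) class scores

In the paper, classification accuracy improves when information from multiple frames of a grasp is combined, so a single-frame model like this one is only the simplest starting point.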
Learning the signatures of the human grasp using a scalable tactile glove. Nature, 569 (7758), 2019.
@article{SSundaram:2019:STAG,
  author    = {Sundaram, Subramanian and Kellnhofer, Petr and Li, Yunzhu and Zhu, Jun-Yan and Torralba, Antonio and Matusik, Wojciech},
  title     = {Learning the signatures of the human grasp using a scalable tactile glove},
  journal   = {Nature},
  volume    = {569},
  number    = {7758},
  year      = {2019},
  publisher = {Nature Publishing Group},
  doi       = {10.1038/s41586-019-1234-z}
}
All data and code are freely available under a license for non-commercial use. Please contact us with inquiries about commercial use.
The schematic and Gerber files for the printed circuit board needed to read out data from our sensor.
Circuit (605 KiB): schematic of the readout electronics.
Gerber files (154 KiB): for manufacturing the PCB.
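As a rough illustration of how a passively probed piezoresistive array can be read out, the sketch below scans a row-column sensor matrix: one row electrode is driven at a time while every column is sampled. The drive_row and read_column_adc functions and the 32 x 32 grid size are hypothetical placeholders; the actual readout is defined by the schematic and Gerber files above.

    # Illustrative sketch of scanning a piezoresistive row-column matrix.
    # drive_row() and read_column_adc() are hypothetical stand-ins for the
    # real readout electronics described by the schematic and Gerber files.
    import random

    N_ROWS, N_COLS = 32, 32          # assumed grid layout of the sensor array

    def drive_row(row):
        """Placeholder: select/energize one row electrode."""
        pass

    def read_column_adc(col):
        """Placeholder: ADC reading of one column (here: random dummy data)."""
        return random.randint(0, 1023)

    def scan_frame():
        """Scan the full matrix once and return a frame of raw ADC values."""
        frame = []
        for r in range(N_ROWS):
            drive_row(r)
            frame.append([read_column_adc(c) for c in range(N_COLS)])
        return frame

    frame = scan_frame()             # one full-hand pressure frame (dummy values here)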
The Scalable Tactile Glove (STAG) datasets are used in our paper for object classification, weight estimation and hand pose discrimination. These datasets are needed to run our code and reproduce the results in our paper (a minimal loading sketch follows the list below). Note that the pressure data alone (i.e., without images) is sufficient for running our code.
Classification dataset (13.6 GiB) with content description.
Pressure data only (36.6 MiB): a smaller package without images; sufficient for running our code.
Blindfolded dataset (413 MiB) for blindfolded classification tests (content description).
Pressure data only (1.58 MiB)
Weight prediction dataset (1.06 GiB) with content description.
Pressure data only (2.85 MiB)
Hand pose dataset (2.67 GiB) for hand pose discrimination (content description).
Pressure data only (5.63 MiB)
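As referenced above, here is a minimal loading sketch for the pressure-only packages. The file name and dataset key below are hypothetical placeholders; the actual file names, keys and frame layout are given in each package's content description.

    # Minimal sketch for loading pressure frames. The file name and key are
    # hypothetical; consult the content description of each dataset package.
    import h5py
    import numpy as np

    with h5py.File("classification_pressure.hdf5", "r") as f:   # hypothetical file name
        pressure = np.array(f["pressure"])                       # hypothetical key, e.g. (N, 32, 32)

    print(pressure.shape)
    # Typical preprocessing: normalize raw readings to [0, 1] before feeding a CNN.
    rng = pressure.max() - pressure.min()
    pressure = (pressure - pressure.min()) / (rng + 1e-8)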
The source code for our machine-learning-based object classification and weight estimation methods is available on GitHub. The algorithms are implemented in Python and require the PyTorch machine learning framework. Note that, in addition to the code, the datasets above are required.
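To illustrate how the weight-estimation task can be framed, the following sketch trains a tiny PyTorch CNN to regress a scalar weight from a 32 x 32 tactile frame with an L1 loss. The architecture, loss and hyperparameters are assumptions for illustration only, not the method implemented in the repository.

    # Illustrative weight-regression sketch (not the repository's method):
    # a tiny CNN regresses a scalar weight from a 32x32 tactile frame.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(32 * 8 * 8, 1),        # single output: predicted weight
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.L1Loss()                # mean absolute error in weight units

    # Dummy batch standing in for (normalized pressure frame, weight) pairs.
    frames = torch.rand(16, 1, 32, 32)
    weights = torch.rand(16, 1)

    for step in range(10):               # a few illustrative optimization steps
        optimizer.zero_grad()
        loss = loss_fn(model(frames), weights)
        loss.backward()
        optimizer.step()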
Please contact the corresponding author, Subramanian Sundaram, with inquiries about the work, or e-mail info@humangrasp.io with any other questions.
© 2019 The Authors. The authors' version of the work is posted here for your personal use. Not for redistribution.