Abstract: |
Multifingered robot hands can be extremely effective in physically exploring and recognizing objects, especially if they are extensively covered with distributed tactile sensors. Convolutional neural networks (CNNs) have proven successful in processing high-dimensional data, such as camera images, and are therefore well suited to analyzing distributed tactile information as well. A major challenge, however, is to organize tactile inputs coming from different locations on the hand into a coherent structure that can leverage the computational properties of the CNN. We therefore introduce a morphology-specific CNN (MS-CNN), in which hierarchical convolutional layers are formed following the physical configuration of the tactile sensors on the robot. We equipped a four-fingered Allegro robot hand with several uSkin tactile sensors; overall, the hand is covered with 240 sensing elements, each measuring three-axis contact force. The MS-CNN layers process the tactile data hierarchically: first at the level of small local clusters, then per finger, and finally over the entire hand. We show experimentally that, after training, the robot hand can successfully recognize objects from a single touch, with a recognition rate of over 95%. Interestingly, the learned MS-CNN representation transfers well to novel tasks: after adding a limited amount of data about new objects, the network can recognize nine types of physical properties.
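To make the cluster-to-finger-to-hand hierarchy concrete, the following is a minimal sketch in PyTorch. The abstract only fixes the totals (four fingers, 240 three-axis sensing elements, hierarchical processing), so everything else here is an assumption for illustration: each uSkin module is taken to be a 4x4 taxel grid with the 3 force axes as input channels, the fingers are assumed to carry (4, 4, 4, 3) modules (15 x 16 = 240 taxels), and all layer widths and the class count are placeholders, not the authors' architecture.

```python
# Hedged sketch of a morphology-specific CNN (MS-CNN) hierarchy.
# Assumed, not from the abstract: 4x4 taxel modules, (4, 4, 4, 3)
# modules per finger, and all layer sizes / class count.
import torch
import torch.nn as nn


class MSCNN(nn.Module):
    def __init__(self, modules_per_finger=(4, 4, 4, 3), n_classes=10):
        super().__init__()
        self.modules_per_finger = modules_per_finger
        # Level 1: one small conv shared across every 4x4 taxel module,
        # treating the 3 force axes as input channels.
        self.local = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),  # -> 16 * 4 * 4 = 256 features per module
        )
        # Level 2: one fully connected block per finger, sized to the
        # number of modules that finger carries.
        self.fingers = nn.ModuleList(
            nn.Sequential(nn.Linear(m * 256, 64), nn.ReLU())
            for m in modules_per_finger
        )
        # Level 3: fuse per-finger features into a hand-level code
        # and classify the touched object.
        self.hand = nn.Sequential(
            nn.Linear(64 * len(modules_per_finger), 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        # x: (batch, 15 modules, 3 axes, 4, 4) -- one force map per module
        b = x.shape[0]
        local = self.local(x.flatten(0, 1)).view(b, x.shape[1], -1)
        feats, start = [], 0
        for m, finger in zip(self.modules_per_finger, self.fingers):
            feats.append(finger(local[:, start:start + m].flatten(1)))
            start += m
        return self.hand(torch.cat(feats, dim=1))


model = MSCNN()
logits = model(torch.randn(2, 15, 3, 4, 4))  # -> (2, 10) class scores
```

The point of the sketch is only the grouping: weights are shared at the module level, while the finger- and hand-level blocks mirror how the sensors are physically distributed on the hand, which is what distinguishes the MS-CNN from a CNN applied to an arbitrarily flattened tactile vector.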