A 3D Visual Attention Model to Guide Tactile Data Acquisition for Object Recognition

Published: 14 November 2017 by MDPI in 4th International Electronic Conference on Sensors and Applications session Applications

Keywords: tactile sensing, tactile sensor array, visual attention, interest points, object recognition

Drawing inspiration from human vision-touch interaction, in which vision assists tactile manipulation tasks, this paper addresses 3D object recognition from tactile data whose acquisition is guided by visual information. An improved computational visual attention model is first applied to images collected from multiple viewpoints over the surface of an object to identify regions that attract visual attention; information about color, intensity, orientation, symmetry, curvature, contrast, and entropy is combined for this purpose. Interest points are then extracted from these regions of interest using an innovative iterative technique that takes into consideration the best viewpoint of the object. Because moving and positioning the tactile sensor to probe the object surface at the identified interest points generally takes a long time, the local data acquisition is first simulated to choose the most promising approach to interpret it. To recognize objects, the tactile images are analyzed with various classifiers, and a similarity-based method is employed to select the best candidate tactile images for training. Among the tested algorithms, the best performance is achieved with the k-nearest-neighbor classifier (87.89% for 4 objects and 75.82% for 6 objects). The proposed solution is then validated on real tactile data collected with a piezo-resistive tactile sensor array; with the same classifier, the best performance is 72.25% for 4 objects and 67.23% for 6 objects.
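The abstract reports that a k-nearest-neighbor classifier applied to tactile images gave the best recognition rates. As a rough illustration of that final classification step only, the sketch below runs majority-vote kNN on flattened pressure maps from a sensor array. It is a minimal sketch under stated assumptions: the 8x8 array size, k = 3, the Euclidean distance metric, and the synthetic "objects" are all illustrative choices, not details taken from the paper.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Predict a label by majority vote among the k nearest training
    images, using Euclidean distance on flattened pressure maps.
    (Illustrative only; k and the metric are assumptions.)"""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]

rng = np.random.default_rng(0)
# Two synthetic "objects" probed by a hypothetical 8x8 (64-taxel) array:
# one yields a flat pressure profile, the other a localized pressure peak.
flat = rng.normal(0.5, 0.05, size=(20, 64))
peak = rng.normal(0.5, 0.05, size=(20, 64))
peak[:, 27] += 1.0  # the peak distinguishes the second object

X = np.vstack([flat, peak])
y = np.array([0] * 20 + [1] * 20)

query = rng.normal(0.5, 0.05, size=64)
query[27] += 1.0  # a new probe of the peaked object
print(knn_predict(X, y, query))  # → 1
```

In practice one would replace the synthetic arrays with real tactile images acquired at the vision-selected interest points, and tune k on held-out data.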