Data-Driven Representation of Soft Deformable Objects From Force-Torque Sensor Data and 3D Vision Measurements
Published:
14 November 2016
by MDPI
in 3rd International Electronic Conference on Sensors and Applications
session Applications
Abstract:
The realistic representation of deformations is still an active area of research, especially for soft objects whose behavior cannot be simply described in terms of elasticity parameters. Most existing techniques assume that the elasticity parameters describing the object's behavior are known a priori, based on assumptions about the object's material, such as its isotropy or homogeneity, or the values of these parameters are tuned manually until the results appear plausible. This is a subjective process and cannot be employed where accuracy is required. This paper proposes a data-driven, neural-network-based model that implicitly captures the deformations of a soft object without requiring any knowledge of the object's material. Visual data, in the form of 3D point clouds gathered by a Kinect sensor, is collected over an object while forces are exerted on it with the probing tip of a force-torque sensor. A novel approach combining distance-based clustering, stratified sampling, and neural gas-tuned mesh simplification is then proposed to describe the particularities of the deformation. The resulting compact representation of the object is denser in the region of the deformation (an average of 97% perceptual similarity with the collected data) while still preserving the object's overall shape (71% similarity over the entire surface), using on average only 30% of the vertices in the original mesh.
Keywords: deformation; force-torque sensor; Kinect; RGB-D data; neural gas; clustering; mesh simplification; stratified sampling; 3D object modeling
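
The neural gas technique named in the abstract can be pictured with a short sketch. The Python fragment below is a minimal, illustrative implementation of the classic rank-based neural gas update (Martinetz and Schulten style), in which a small set of prototype units is fitted to a 3D point cloud with a decaying learning rate and neighborhood width. The parameter values, function name, and synthetic point cloud are assumptions for demonstration only and are not taken from the paper, which couples neural gas with clustering and stratified sampling in its full pipeline.

    # Minimal neural gas sketch: fit prototype units to a 3D point cloud.
    # All parameter defaults below are illustrative assumptions.
    import numpy as np

    def neural_gas(points, n_units=100, n_iters=10000,
                   eps_i=0.5, eps_f=0.01, lam_i=30.0, lam_f=0.1, seed=0):
        """Fit n_units prototype vectors to a point cloud of shape (N, 3)."""
        rng = np.random.default_rng(seed)
        # Initialize units at randomly chosen input points.
        units = points[rng.choice(len(points), n_units, replace=False)].copy()
        for t in range(n_iters):
            frac = t / n_iters
            eps = eps_i * (eps_f / eps_i) ** frac   # learning-rate decay
            lam = lam_i * (lam_f / lam_i) ** frac   # neighborhood decay
            x = points[rng.integers(len(points))]   # draw one sample
            # Rank every unit by its distance to the sample (0 = closest).
            ranks = np.argsort(np.argsort(np.linalg.norm(units - x, axis=1)))
            # Move each unit toward the sample, weighted by its rank.
            units += (eps * np.exp(-ranks / lam))[:, None] * (x - units)
        return units

    # Usage example on a synthetic cloud: the returned prototypes settle
    # more densely where the input points are denser, which is the property
    # the paper exploits to concentrate vertices in the deformed region.
    cloud = np.random.default_rng(1).normal(size=(5000, 3))
    prototypes = neural_gas(cloud, n_units=50)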