Visual servoing for fine grasping of deformable objects

Author: Furlan, Serena
Contributors: Alenyà, Guillem, Angulo, Cecilio, European Commission
Year of publication: 2021
Source: Digital.CSIC. Repositorio Institucional del CSIC
Description: Master's thesis presented at the Universitat Politècnica de Catalunya, Master in Automation Engineering.--2021-02-09
This Master's thesis investigates and assesses different grasping strategies that allow a robot to grasp folded textiles. In general, grasping textiles is difficult because of the variety of possible fabrics and garment patterns and because of the high degree of deformation the garments can undergo. The former implies complexity in perception; the latter causes difficulties in reconstructing the 3D model and predicting the behaviour of the objects. This project focuses on the final approach motion that enables grasping of the garment. Compared to the pre-grasp motion (the gripper's first approach to the garment), this is a challenging part that requires more precision from vision and more control accuracy. Moreover, the first part of the movement has already been developed in other projects that rely on hand-eye calibration and classical kinematics. In particular, we are interested in providing the robot with a suitable strategy to grasp only the top layer of a folded garment, which demands a precision from vision that the camera on the robot's head cannot guarantee. We therefore embedded an endoscopic camera in the robot hand to obtain a mobile camera with a better viewpoint. After evaluating different techniques, such as visual servoing, line detection and visual tracking, we developed an approach based on line detection. The method consists of a vision part followed by a control part. The vision phase exploits simple segmentation techniques, namely Canny edge detection and the Hough transform, to speed up image processing and consequently the entire procedure. The control phase exploits the information coming from vision to compute new control messages sent to the Whole Body Controller (WBC) of the robot. These messages contain the new position to which we want to send the arm-tool link in order to approach the garment carefully.
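The line-detection step of the vision phase can be sketched as Hough-style voting over a binary edge map. The following is a minimal pure-NumPy illustration (in practice one would typically run Canny edge detection first, e.g. via OpenCV's `cv2.Canny`, and use `cv2.HoughLinesP`); all thresholds and sizes here are arbitrary example values, not taken from the thesis:

```python
import numpy as np

def hough_lines(edge_img, n_theta=180, vote_thresh=50):
    """Minimal Hough transform: each edge pixel votes for all (rho, theta)
    pairs of lines passing through it; strong accumulator cells are lines."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)                # coordinates of edge pixels
    for t_idx, theta in enumerate(thetas):
        # rho = x*cos(theta) + y*sin(theta), shifted by diag to stay non-negative
        rhos = (xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc[:, t_idx], rhos, 1)        # accumulate votes
    peaks = np.argwhere(acc >= vote_thresh)
    return [(r - diag, thetas[t]) for r, t in peaks]

# Synthetic edge map: a horizontal edge at row y = 40 (e.g. a fold boundary)
edge = np.zeros((100, 100), dtype=np.uint8)
edge[40, 10:90] = 1
lines = hough_lines(edge, vote_thresh=60)
```

On this synthetic input the detector returns a line with theta near pi/2 (horizontal) and rho near 40, which is the kind of fold-edge hypothesis the grasping strategy needs.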
Finally, we performed evaluation experiments with a TIAGo mobile manipulation robot in the Perception and Manipulation laboratory at IRI, a laboratory that simulates an apartment. Specifically, we showed that, despite the low precision of the robot's WBC, the designed closed-loop procedure works correctly with various types of folded garments, with an arbitrary number of layers, lying on different surfaces and under different lighting conditions.
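The closed-loop behaviour described above can be sketched as a simple proportional controller that turns the pixel error of the detected edge into small Cartesian corrections for the arm-tool link. Everything below (the gain, the pixel-to-metre scale, the step cap, and the simulated image feedback) is an illustrative assumption, not a value from the thesis:

```python
import numpy as np

# Hypothetical constants for the sketch (not thesis values).
PX_PER_M = 1000.0   # assumed image-to-workspace scale: 1 px per mm
GAIN = 0.5          # proportional gain on the Cartesian correction
MAX_STEP = 0.005    # cap each step at 5 mm for a careful approach

def approach_step(edge_y_px, target_y_px):
    """Map the pixel error of the detected edge to a small Cartesian
    offset (metres) to send as a new arm-tool position."""
    err_m = (target_y_px - edge_y_px) / PX_PER_M
    return float(np.clip(GAIN * err_m, -MAX_STEP, MAX_STEP))

# Simulated closed loop: observe the edge, step, observe again, until aligned.
y, target = 120.0, 100.0        # detected vs desired edge row (pixels)
for _ in range(100):
    if abs(target - y) < 1.0:   # close enough: stop the approach
        break
    dz = approach_step(y, target)   # in the real system: a new WBC target pose
    y += dz * PX_PER_M              # simulated effect of the motion in the image
```

Capping each step keeps the approach careful, and re-detecting the edge after every motion is what makes the procedure robust to the WBC's low positioning precision.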
Database: OpenAIRE