Multistage Localization for High Precision Mobile Manipulation Tasks

Date
2017-03-03
Publisher
Virginia Tech
Abstract

This paper presents a multistage localization approach for an autonomous industrial mobile manipulator (AIMM). The approach allows tasks whose operational scope exceeds the reach of the robot's manipulator to be completed without recalibrating the end-effector position each time the mobile base moves to a new location. The AIMM is first localized within its area of operation (AO) using adaptive Monte Carlo localization (AMCL), which relies on the robot's fused odometry and sensor messages together with a 2-D map of the AO generated by an optimization-based smoothing simultaneous localization and mapping (SLAM) technique. The robot then navigates to a predefined start location in the map, avoiding obstacles with a local planning technique called trajectory rollout. Once there, the robot uses its RGB-D sensor to localize an augmented reality (AR) tag in the map frame. The tag's identity and its 3-D position and orientation, collectively known as its pose, are combined with a priori knowledge to generate a list of initial feature points and their approximate locations. After the end-effector moves to the approximate location of a feature point, a control loop refines both the feature point's location and the end-effector's pose to within a user-specified tolerance. The loop uses images from a calibrated machine vision camera together with a laser pointer, simulating stereo vision, to localize the feature point in 3-D space through computer vision techniques and basic geometry. The approach was implemented on two ROS-enabled robots, the Clearpath Robotics Husky and the Fetch Robotics Fetch, to demonstrate its utility on two tasks prevalent in both manufacturing and construction: drilling and sealant application. The proposed approach achieved an average accuracy of ±1 mm in these operations, verifying its efficacy for tasks whose operational scope exceeds the manipulator's reach and its robustness for general manufacturing applications.
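As an illustration of the navigation stage described above, the following Python sketch sends the mobile base to a predefined start pose through the ROS navigation stack (move_base), which runs AMCL and a trajectory-rollout local planner internally. This is a minimal sketch, not the authors' code; the node name, map frame, and goal coordinates are assumptions for the example.

#!/usr/bin/env python
# Minimal sketch: drive the AIMM to a predefined start pose in the map frame
# via move_base. Goal coordinates below are illustrative assumptions.
import rospy
import actionlib
from geometry_msgs.msg import Quaternion
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def navigate_to_start(x, y, orientation):
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'      # pose expressed in the SLAM map frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation = orientation

    client.send_goal(goal)
    client.wait_for_result()                      # block until navigation succeeds or aborts
    return client.get_state()

if __name__ == '__main__':
    rospy.init_node('navigate_to_start_demo')
    # Assumed start pose: 2 m forward, 1 m left, facing along the map's +x axis.
    navigate_to_start(2.0, 1.0, Quaternion(0.0, 0.0, 0.0, 1.0))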
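The AR tag stage can be sketched as a TF lookup: the tag's pose in the map frame is combined with a priori offsets to produce the approximate feature-point locations mentioned in the abstract. The tag frame name and the offsets below are hypothetical placeholders, not values from the thesis.

#!/usr/bin/env python
# Minimal sketch: look up a detected AR tag's pose in the map frame and apply
# a priori tag-frame offsets to estimate feature-point locations.
import numpy as np
import rospy
import tf
from tf import transformations as tft

def feature_points_from_tag(listener, tag_frame, tag_frame_offsets):
    # Transform the tag pose into the map frame maintained by AMCL.
    listener.waitForTransform('map', tag_frame, rospy.Time(0), rospy.Duration(4.0))
    trans, rot = listener.lookupTransform('map', tag_frame, rospy.Time(0))
    T = tft.quaternion_matrix(rot)                # 4x4 rotation, tag frame -> map frame
    T[0:3, 3] = trans                             # add the translation component
    points = []
    for offset in tag_frame_offsets:
        p = T.dot(np.append(offset, 1.0))         # homogeneous transform of the offset
        points.append(p[:3])
    return points

if __name__ == '__main__':
    rospy.init_node('tag_feature_points_demo')
    listener = tf.TransformListener()
    # Assumed: the tag is published as TF frame 'ar_marker_4', with two
    # feature points 0.10 m and 0.20 m along the tag's x axis.
    pts = feature_points_from_tag(listener, 'ar_marker_4',
                                  [np.array([0.10, 0.0, 0.0]),
                                   np.array([0.20, 0.0, 0.0])])
    rospy.loginfo('Approximate feature points in map frame: %s', pts)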
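The refinement stage can be approximated by a simple camera-plus-laser ranging loop. The sketch below assumes a red laser mounted parallel to the camera's optical axis at a known baseline (so the dot's pixel offset gives depth, standing in for the second view of a stereo pair) and a circular feature, such as a hole to be drilled, detected with a Hough transform. The detection thresholds, baseline, and 1 mm tolerance are illustrative assumptions, not the authors' values.

# Hypothetical sketch of the visual-servoing refinement loop.
import numpy as np
import cv2

def locate_laser_dot(image_bgr):
    # Centroid of the brightest red blob, assumed to be the laser dot.
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 200), (10, 255, 255))
    m = cv2.moments(mask)
    if m['m00'] == 0:
        return None
    return (m['m10'] / m['m00'], m['m01'] / m['m00'])

def locate_feature(image_bgr):
    # Assumed feature detector: center of the most prominent circle
    # (e.g. a hole target) found with a Hough transform.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=30, minRadius=3, maxRadius=60)
    if circles is None:
        return None
    u, v, _ = circles[0][0]
    return (float(u), float(v))

def depth_from_laser_pixel(u_laser, cx, fx, baseline_m):
    # Parallel-laser ranging: depth is inversely proportional to the dot's
    # pixel offset from the principal point, z = fx * b / (u - cx).
    offset = u_laser - cx
    if abs(offset) < 1e-6:
        return None
    return fx * baseline_m / offset

def refine_feature_point(get_image, move_end_effector, intrinsics,
                         baseline_m, tol_m=0.001, max_iters=10):
    # Iterate until the back-projected feature offset falls within tolerance.
    fx, fy, cx, cy = intrinsics
    for _ in range(max_iters):
        img = get_image()
        laser_uv, feat_uv = locate_laser_dot(img), locate_feature(img)
        if laser_uv is None or feat_uv is None:
            continue
        z = depth_from_laser_pixel(laser_uv[0], cx, fx, baseline_m)
        if z is None:
            continue
        dx = (feat_uv[0] - cx) * z / fx   # lateral offsets of the feature,
        dy = (feat_uv[1] - cy) * z / fy   # back-projected to the camera frame
        if np.hypot(dx, dy) < tol_m:
            return (dx, dy, z)            # converged: feature localized in 3-D
        move_end_effector(dx, dy)         # command a corrective Cartesian move
    return None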

Keywords
Autonomous Navigation, SLAM, Visual Servoing, Mobile Manipulation, Computer Vision, State Machine