Please use this identifier to cite or link to this item:
https://hdl.handle.net/2440/134089
Type: | Conference paper |
Title: | Visual odometry revisited: what should be learnt? |
Author: | Zhan, H.; Weerasekera, C.S.; Bian, J.-W.; Reid, I. |
Citation: | IEEE International Conference on Robotics and Automation, 2020, pp.4203-4210 |
Publisher: | IEEE |
Publisher Place: | online |
Issue Date: | 2020 |
Series/Report no.: | IEEE International Conference on Robotics and Automation ICRA |
ISBN: | 9781728173955 |
ISSN: | 1050-4729; 2577-087X |
Conference Name: | IEEE International Conference on Robotics and Automation (ICRA) (31 May 2020 - 31 Aug 2020 : Paris, France) |
Statement of Responsibility: | Huangying Zhan, Chamara Saroj Weerasekera, Jia-Wang Bian, Ian Reid |
Abstract: | In this work we present a monocular visual odometry (VO) algorithm which leverages geometry-based methods and deep learning. Most existing VO/SLAM systems with superior performance are based on geometry and have to be carefully designed for different application scenarios. Moreover, most monocular systems suffer from the scale-drift issue. Some recent deep learning works learn VO in an end-to-end manner, but the performance of these deep systems is still not comparable to geometry-based methods. In this work, we revisit the basics of VO and explore the right way to integrate deep learning with epipolar geometry and the Perspective-n-Point (PnP) method. Specifically, we train two convolutional neural networks (CNNs) for estimating single-view depths and two-view optical flows as intermediate outputs. With the deep predictions, we design a simple but robust frame-to-frame VO algorithm (DF-VO) which outperforms pure deep learning-based and geometry-based methods. More importantly, our system does not suffer from the scale-drift issue, being aided by a scale-consistent single-view depth CNN. Extensive experiments on the KITTI dataset show the robustness of our system, and a detailed ablation study shows the effect of different factors in our system. Code is available at: DF-VO. |
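The abstract's scale-consistency claim rests on a standard idea: two-view epipolar geometry recovers translation only up to scale, but a depth CNN with a consistent scale lets that scale be fixed by comparing triangulated depths against predicted depths. A minimal sketch of that alignment step follows; the function name and the median-ratio estimator are illustrative assumptions, not the authors' actual DF-VO code.

```python
import numpy as np

def align_translation_scale(triangulated_depths, cnn_depths):
    """Estimate the scale factor relating up-to-scale triangulated depths
    (from two-view geometry) to CNN-predicted depths.
    A median over per-point depth ratios gives robustness to outliers."""
    ratios = cnn_depths / triangulated_depths
    return float(np.median(ratios))

# Synthetic check: triangulated depths equal the predicted depths divided
# by an unknown scale (2.5 here), perturbed with mild multiplicative noise.
rng = np.random.default_rng(0)
cnn_depths = rng.uniform(2.0, 30.0, size=200)   # predicted depths (metres)
true_scale = 2.5
triangulated = cnn_depths / true_scale * (1 + rng.normal(0, 0.02, size=200))

scale = align_translation_scale(triangulated, cnn_depths)
print(round(scale, 2))  # recovers a value close to 2.5
```

Multiplying the unit translation from essential-matrix decomposition by this scale yields a metrically consistent frame-to-frame motion, which is the mechanism by which a scale-consistent depth network can suppress scale drift.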
Rights: | © 2020 IEEE |
DOI: | 10.1109/ICRA40945.2020.9197374 |
Grant ID: | http://purl.org/au-research/grants/arc/FL130100102 http://purl.org/au-research/grants/arc/CE140100016 |
Published version: | http://dx.doi.org/10.1109/icra40945.2020.9197374 |
Appears in Collections: | Electrical and Electronic Engineering publications |
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.