Title
Multiple-hypothesis vision-based landing autonomy

Author(s)
Nakamura, Takuma
Advisor(s)
Johnson, Eric N.
Tsiotras, Panagiotis
Abstract
Unmanned aerial vehicles (UAVs) need humans in the mission loop for many tasks, and landing is one of the tasks that typically involves a human pilot. This is because of the complexity of the maneuver itself and flight-critical factors such as recognition of the landing zone, collision avoidance, assessment of landing sites, and the decision to abort the maneuver. Another critical aspect is the reliance of UAVs on the global positioning system (GPS). GPS is not a reliable solution for landing in some scenarios (e.g., delivering a package in a dense urban area, or a surveillance UAV returning to its home ship under signal jamming), and a landing based solely on GPS severely restricts the UAV operation envelope. Vision is a promising path to fully autonomous landing because a camera is a rich, lightweight, and affordable sensor that functions without any external resource.

Although vision is a powerful tool for autonomous landing, its use for state estimation requires careful consideration. First, vision-based landing faces the problem of occlusion: a target detected at high altitude may be lost at certain altitudes as the vehicle descends, yet a small visual target cannot be recognized from high altitude. Second, standard filtering methods such as the extended Kalman filter (EKF) struggle with the complex measurement-error behavior introduced by the discrete pixel space, the conversion from pixels to physical units, the camera model, and the detection algorithms themselves. The vision sensor produces a varying number of measurements with each image, and those measurements may include false positives. Moreover, the estimation system is heavily tasked under realistic conditions: the landing site may be moving, tilted, or close to an obstacle, and more than one landing location may be available. In addition to assessing these conditions, the vision system must quantify the confidence of its estimates, because the decisions to initiate, continue, or abort the maneuver are made from the estimated states and their confidence. A system that handles these issues and consistently produces a navigation solution throughout the landing removes one of the limitations of autonomous UAV operation.

This thesis presents a novel state estimation system for UAV landing. In this system, vision data is used both to estimate the state of the vehicle and to map the state of the landing target (position, velocity, and attitude) within the framework of simultaneous localization and mapping (SLAM). Using the SLAM framework, the system becomes resilient to a loss of GPS and to other sensor failures. A novel vision algorithm that detects a portion of the marker is developed, and its stochastic properties are studied; this algorithm extends the detectable range of the vision system for any known marker. However, the algorithm produces a highly nonlinear, non-Gaussian, and multi-modal error distribution, so a naive filter implementation would not accurately estimate the states. A vision-aided navigation algorithm is therefore derived within extended Kalman particle filter (PF-EKF) and Rao-Blackwellized particle filter (RBPF) frameworks, in addition to a standard EKF framework. These multi-hypothesis approaches not only handle the highly nonlinear and non-Gaussian distribution of the vision measurement errors but also yield numerically stable filters. Their computational cost is lower than that of a naive particle filter implementation, and they run in real time. The system is validated through numerical simulation, image-in-the-loop simulation, and flight tests.
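To make the multi-hypothesis idea concrete, the sketch below is an illustrative Python/NumPy example, not the estimator developed in this thesis: it shows a bootstrap particle filter measurement update in which each pixel detection of the marker is modeled as either a true observation or uniform clutter, producing the kind of multi-modal, non-Gaussian likelihood discussed above. The state (camera-frame relative target position), the pinhole camera parameters, and all constants are assumptions made purely for illustration.

import numpy as np

# --- illustrative constants (assumptions for this sketch, not from the thesis) ---
N = 500                   # number of particles
F_PX = 600.0              # hypothetical pinhole focal length, pixels
PIX_STD = 3.0             # assumed pixel noise of a true marker detection
P_CLUTTER = 0.2           # assumed probability that a detection is a false positive
IMG_W, IMG_H = 640, 480   # assumed image size for the uniform clutter model
rng = np.random.default_rng(0)

def predict(particles, dt, walk_std=0.5):
    # Propagate relative-position particles [x, y, z] with random-walk process noise.
    return particles + rng.normal(0.0, walk_std * dt, size=particles.shape)

def project(particles):
    # Pinhole projection of camera-frame relative position onto the image plane.
    z = np.maximum(particles[:, 2], 1e-3)           # guard against division by zero
    u = F_PX * particles[:, 0] / z + IMG_W / 2.0
    v = F_PX * particles[:, 1] / z + IMG_H / 2.0
    return np.stack([u, v], axis=1)

def update(particles, weights, detections):
    # Multi-hypothesis likelihood: each detection is either a true observation
    # (Gaussian around the projected particle) or clutter (uniform over the image),
    # so the per-particle likelihood is a mixture and can be multi-modal.
    pred_px = project(particles)
    clutter = 1.0 / (IMG_W * IMG_H)
    for z_px in detections:
        d2 = np.sum((pred_px - z_px) ** 2, axis=1)
        gauss = np.exp(-0.5 * d2 / PIX_STD ** 2) / (2.0 * np.pi * PIX_STD ** 2)
        weights = weights * ((1.0 - P_CLUTTER) * gauss + P_CLUTTER * clutter)
    weights += 1e-300                                # avoid an all-zero weight vector
    return weights / weights.sum()

def resample(particles, weights):
    # Multinomial resampling when the effective sample size drops below half of N.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))
    return particles, weights

# Minimal usage: one prediction/update cycle with one true detection and one false positive.
particles = rng.normal([0.0, 0.0, 10.0], [2.0, 2.0, 2.0], size=(N, 3))
weights = np.full(N, 1.0 / N)
detections = np.array([[330.0, 250.0], [100.0, 60.0]])
particles = predict(particles, dt=0.1)
weights = update(particles, weights, detections)
particles, weights = resample(particles, weights)
print("weighted estimate of relative target position:", particles.T @ weights)

Unlike this plain bootstrap filter, the PF-EKF and RBPF formulations described in the abstract exploit structure in the problem so that fewer particles are needed than in a naive particle filter, which is what keeps the computational cost compatible with real-time operation.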
Date Issued
2018-08-23
Resource Type
Text
Resource Subtype
Dissertation