Title:
Manipulating state space distributions for sample-efficient imitation-learning

Author(s)
Schroecker, Yannick Karl Daniel
Advisor(s)
Isbell, Charles L.
Abstract
Imitation learning has emerged as one of the most effective approaches to train agents to act intelligently in unstructured and unknown domains. On its own or in combination with reinforcement learning, it enables agents to copy the expert's behavior and to solve complex, long-term decision-making problems. However, to utilize demonstrations effectively and learn from a finite amount of data, the agent needs to develop an understanding of the environment. This thesis investigates estimators of the state-distribution gradient as a means to influence which states the agent will see and thereby guide it to imitate the expert's behavior. Furthermore, this thesis shows that approaches that reason over future states in this way are able to learn from sparse signals and thus provide a way to effectively program agents. Specifically, this dissertation aims to validate the following thesis statement: Exploiting inherent structure in Markov chain stationary distributions allows learning agents to reason about likely future observations, and enables robust and efficient imitation learning, providing an effective and interactive way to teach agents from minimal demonstrations.
Date Issued
2020-03-16
Resource Type
Text
Resource Subtype
Dissertation