Learning hierarchical teaching policies for cooperative agents
Author(s)
How, Jonathan P.
Accepted version (1.107Mb)
Open Access Policy
Creative Commons Attribution-Noncommercial-Share Alike
Terms of use
Metadata
Abstract
© 2020 International Foundation for Autonomous Agents and Multiagent Systems. Collective learning can be greatly enhanced when agents effectively exchange knowledge with their peers. In particular, recent work studying agents that learn to teach other teammates has demonstrated that action advising accelerates team-wide learning. However, prior work has simplified the learning of advising policies by using simple function approximations and has only considered advising with primitive (low-level) actions, limiting the scalability of learning and teaching to complex domains. This paper introduces a novel learning-to-teach framework, called hierarchical multiagent teaching (HMAT), that improves scalability to complex environments by using deep representations for student policies and by advising with more expressive extended action sequences over multiple levels of temporal abstraction. Our empirical evaluations demonstrate that HMAT improves team-wide learning progress in large, complex domains where previous approaches fail. HMAT also learns teaching policies that can effectively transfer knowledge to different teammates with knowledge of different tasks, even when the teammates have heterogeneous action spaces.
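To make the abstract's core idea concrete, the sketch below contrasts primitive-action advising with the extended-action advising the paper describes: when the student is uncertain, the teacher intervenes with a temporally extended "option" (a short sequence of primitive actions) rather than a single low-level action. This is a minimal illustration, not the authors' HMAT implementation; the option names, the Q-value-gap uncertainty measure, and the threshold are all hypothetical.

```python
# Illustrative sketch (not the HMAT algorithm itself): a teacher advises a
# student with a temporally extended option instead of a single primitive
# action. All names and thresholds here are assumptions for illustration.

PRIMITIVE_ACTIONS = ["left", "right", "up", "down"]

# Hypothetical options: each maps a high-level name to a fixed sequence of
# primitive actions, standing in for one level of temporal abstraction.
OPTIONS = {
    "go_to_goal": ["up", "up", "right"],
    "avoid_obstacle": ["left", "down"],
}

def student_uncertainty(q_values):
    """Gap between the best and second-best Q-value; a small gap means the
    student has no clear preference, i.e. it is uncertain."""
    top_two = sorted(q_values, reverse=True)[:2]
    return top_two[0] - top_two[1]

def advise(q_values, teacher_option, threshold=0.1):
    """Teacher intervenes with a multi-step option when the student is
    uncertain; otherwise the student acts greedily on its own Q-values."""
    if student_uncertainty(q_values) < threshold:
        return OPTIONS[teacher_option]      # extended (multi-step) advice
    greedy = PRIMITIVE_ACTIONS[q_values.index(max(q_values))]
    return [greedy]                         # single self-directed action

# Uncertain student (near-tied Q-values) receives the full option sequence:
print(advise([0.50, 0.52, 0.49, 0.51], "go_to_goal"))  # ['up', 'up', 'right']
# Confident student ignores the teacher and takes its greedy action:
print(advise([0.9, 0.1, 0.2, 0.3], "go_to_goal"))      # ['left']
```

The design point mirrored here is the one the abstract emphasizes: advising with an expressive multi-step sequence lets a single teacher decision steer several of the student's time steps, which is what makes advising scale to longer-horizon, more complex tasks.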
Date issued
2020
Department
MIT-IBM Watson AI Lab; Massachusetts Institute of Technology. Laboratory for Information and Decision Systems
Journal
Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS
Citation
How, Jonathan P. 2020. "Learning hierarchical teaching policies for cooperative agents." Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, 2020-May.
Version: Author's final manuscript