Learning Hierarchical Teaching in Cooperative Multiagent Reinforcement Learning

Abstract

In cooperative multiagent reinforcement learning, agents commonly acquire heterogeneous knowledge. Team-wide learning can be greatly improved if agents effectively share this knowledge with one another. In particular, recent work showed that action advising, a form of peer-to-peer knowledge transfer from teacher agents to student agents, improves team-wide learning. However, prior work on action advising considered only advice consisting of primitive (low-level) actions, which limits scalability. This paper introduces a novel learning-to-teach framework, called hierarchical multiagent teaching (HMAT), in which teacher advice may include extended action sequences over multiple levels of temporal abstraction. Empirical evaluations show that HMAT accelerates team-wide learning in environments more complex than those considered in previous learning-to-teach research. HMAT is also shown to learn teaching policies that transfer to different teammates and tasks, even when teammates have heterogeneous action spaces.
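To make the distinction concrete, the following is a minimal illustrative sketch of peer-to-peer action advising, contrasting primitive-action advice with temporally extended advice. This is not the paper's actual HMAT algorithm; the `teacher_advise`, `greedy_action`, and `Student` names, the toy integer state, and the fixed advice horizon are all hypothetical simplifications for exposition.

```python
# Illustrative sketch only: NOT the paper's HMAT implementation.
# A teacher advises a student either with a single primitive action
# (horizon 1) or with a temporally extended action sequence (horizon > 1).

def greedy_action(state):
    # Placeholder policy for the toy example: action index = state mod 4.
    return state % 4

def teacher_advise(state, advise_horizon=3):
    """Hypothetical teacher: returns a list of advised actions.

    advise_horizon=1 corresponds to primitive-action advising;
    advise_horizon>1 corresponds to extended (hierarchical-style) advice.
    """
    return [greedy_action(state + t) for t in range(advise_horizon)]

class Student:
    """Toy student that follows queued teacher advice before acting alone."""

    def __init__(self):
        self.advice_queue = []

    def act(self, state, teacher=None):
        # Request new advice only when the previous advice is exhausted.
        if not self.advice_queue and teacher is not None:
            self.advice_queue = teacher(state)
        if self.advice_queue:
            return self.advice_queue.pop(0)
        return greedy_action(state)

student = Student()
# With horizon 3, one teacher query covers three consecutive steps.
actions = [student.act(s, teacher=teacher_advise) for s in range(5)]
print(actions)  # → [0, 1, 2, 3, 0]
```

With a longer advice horizon the student queries the teacher less often, which is one intuition behind why extended advice can scale better than per-step primitive advising.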
