Using reinforcement learning to autonomously identify the source of errors for agents in a group mission

Abstract

When agents are deployed as a swarm to execute a mission, a sudden failure of some of the agents is often observed from the command base. Relying solely on communication between the command base and the agent in question, it is generally difficult to determine whether the failure is caused by actuators (hypothesis $h_a$) or by sensors (hypothesis $h_s$). However, by instigating a collision between the agents, we can pinpoint the cause of the failure: under $h_a$ we expect to detect corresponding displacements, while under $h_s$ we do not. We propose that such swarm strategies for grasping the situation be generated autonomously by artificial intelligence (AI). Preferable actions for the distinction (e.g., the collision) are those maximizing, as a value function, the difference between the expected behaviors under each hypothesis. Such actions, however, exist only very sparsely among all the possibilities, so conventional gradient-based search is ineffective. To mitigate this shortcoming, we successfully applied reinforcement learning, achieving maximization of such a sparse value function; the machine learning was completed autonomously. The colliding action is the basis for distinguishing the hypotheses. To pinpoint the agent with the actuator error via this action, the agents behave as if they were assisting the malfunctioning one in achieving the given mission.
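To illustrate the core idea, here is a minimal sketch of how a sparse value function of this kind might be maximized with tabular reinforcement learning. The world model, action set, and reward shape below are illustrative assumptions, not the paper's actual setup: only one action ("collide") produces different predicted displacements under $h_a$ and $h_s$, so the reward is zero almost everywhere, yet an epsilon-greedy learner still finds it.

```python
import random

# Hypothetical 1-D scenario (assumption for illustration): a suspect agent
# sits at position 0.  Under h_a (actuator fault) it cannot hold its place,
# so a collision displaces it; under h_s (sensor fault) its actuators work
# and it resists the push.
ACTIONS = ["wait", "circle", "signal", "collide"]  # only "collide" discriminates

def predicted_displacement(action, hypothesis):
    """Expected displacement of the suspect agent under one hypothesis."""
    if action == "collide":
        return 1.0 if hypothesis == "h_a" else 0.0
    return 0.0  # non-contact actions look identical under both hypotheses

def reward(action):
    # Value function: gap between expected behaviors under the hypotheses.
    # It is sparse: nonzero for exactly one action out of the whole set.
    return abs(predicted_displacement(action, "h_a")
               - predicted_displacement(action, "h_s"))

def train(episodes=500, eps=0.2, alpha=0.5, seed=0):
    """Epsilon-greedy tabular value learning over the discrete action set."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.choice(ACTIONS)          # explore the sparse landscape
        else:
            a = max(q, key=q.get)            # exploit current estimate
        q[a] += alpha * (reward(a) - q[a])   # incremental value update
    return q

q = train()
best_action = max(q, key=q.get)
```

A gradient method would fail here because the reward is flat (zero) in every direction around almost all actions; random exploration, as in the sketch, is what lets the learner stumble on the rare discriminating action and then lock onto it.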
