
DARIO BAUSO

Distributed n-player approachability via time and space average consensus

Abstract

In this paper we consider repeated coalitional games with transferable utility (TU) over networks. Namely, we consider a set of n players that have to distribute a vector of rewards (one for each player) among themselves. In our networked version there is no coordinator allocating the rewards; instead, the agents have to agree on a common time-averaged reward vector by updating their local estimates of it. The common time-averaged reward vector has to approach a suitable constraint set, called the core of the game, which guarantees that no agent benefits from quitting the grand coalition. We propose a distributed algorithm based on doubly (over time and space) averaging. At every iteration, each agent first computes a weighted average of its own time-averaged estimate and those of its neighbors, and then generates a new reward vector in order to drive the time-averaged estimate towards a pre-assigned set. The main contribution of the paper is to prove that, under certain assumptions, i) all agents' estimates reach consensus on the true time-averaged reward vector, and ii) the estimates (and thus the true time-averaged reward vector) approach the pre-assigned set. The conditions under which this happens relate to the connectivity over time of the communication topology and to the approachability principle.
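
To make the doubly averaging scheme concrete, the following Python sketch shows one possible instantiation of the update described in the abstract. It is only an illustration under simplifying assumptions: the target set is a box rather than the core of a TU game, the communication topology is a fixed complete graph with uniform weights, and the function names (doubly_averaging_step, project_onto_box) and the specific way the new reward vector is generated are illustrative choices, not taken from the paper.

import numpy as np

def project_onto_box(x, lower, upper):
    # Euclidean projection onto a box; a stand-in for projection onto the core.
    return np.clip(x, lower, upper)

def doubly_averaging_step(x_hat, A, t, lower, upper):
    # x_hat: (n, d) array; row i is agent i's time-averaged estimate of the reward vector.
    # A: (n, n) row-stochastic weight matrix encoding the current communication topology.
    # t: iteration counter (t >= 1).
    z = A @ x_hat                      # space averaging with the neighbors' estimates
    x_next = np.empty_like(x_hat)
    for i in range(x_hat.shape[0]):
        # Approachability-inspired choice (illustrative): propose a reward vector that
        # steers the running average toward the projection of the consensus estimate
        # onto the target set.
        u = project_onto_box(z[i], lower, upper)
        # Time averaging: fold the newly generated reward into the running average.
        x_next[i] = (t * z[i] + u) / (t + 1)
    return x_next

# Illustrative run: 4 agents, 3-dimensional rewards, uniform weights on a complete graph.
rng = np.random.default_rng(0)
n, d = 4, 3
A = np.full((n, n), 1.0 / n)
lower, upper = np.zeros(d), np.ones(d)   # box standing in for the core
x_hat = rng.normal(size=(n, d)) * 5.0
for t in range(1, 2001):
    x_hat = doubly_averaging_step(x_hat, A, t, lower, upper)
print(x_hat)  # rows should (approximately) agree and lie close to the box

In this toy run the two claimed effects of the abstract can be observed qualitatively: the rows of x_hat converge to a common value (consensus), and that value drifts into the pre-assigned set as the time average accumulates.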