Communication management in distributed sensor interpretation
Distributed Sensor Interpretation (DSI) problems have been the subject of considerable research within the cooperative MAS community. In a DSI system, data is collected from different sensors and must be integrated to produce the best interpretation. Distributed approaches to DSI emphasize not only distributed collection of data, but also distributed processing of data. However, in virtually all real-world DSI systems the agents must exchange data, local results, and/or other information to develop a solution of acceptable quality. Unless communication among the agents is appropriately limited, its cost may negate much of the benefit of distributed processing. Unfortunately, the state of the art in MAS does not yet offer formal design methods that allow one to evaluate a potential DSI domain and determine the optimal coordination strategy. I believe this is a serious issue that will hinder the deployment of many important applications of sensor networks.
My work is one of the first attempts to address this issue. I formalized the communication problem in DSI with a Distributed Bayesian Network and cast the decision of what to communicate as a decentralized Markov decision process (DEC-MDP). With this model, one can generate a communication strategy for a given DSI problem that achieves a required confidence level in the interpretation task at minimum expected communication cost.
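As a toy illustration of this cost-versus-confidence trade-off (not the DEC-MDP formulation itself, which is decentralized and far richer), the sketch below runs value iteration on a single-agent surrogate: states are discretized confidence levels, the only costly action is transmitting a local result, and the computed values give the minimum expected communication cost to reach the required confidence. All names and numbers here are hypothetical.

```python
# Hypothetical single-agent surrogate for the communicate-or-not decision:
# minimize expected communication cost until a target confidence is reached.
# LEVELS, TARGET, SEND_COST, and P_IMPROVE are illustrative values only.

LEVELS = [0.5, 0.6, 0.7, 0.8, 0.9, 0.95]  # discretized confidence levels
TARGET = 0.9                              # required confidence in the interpretation
SEND_COST = 1.0                           # cost of one communication action
P_IMPROVE = 0.8                           # chance a transmitted result raises confidence

def expected_costs(sweeps=100):
    """Value iteration: V[i] = minimum expected communication cost
    needed to reach TARGET starting from confidence LEVELS[i]."""
    n = len(LEVELS)
    V = [0.0] * n  # levels at or above TARGET are absorbing with zero cost
    for _ in range(sweeps):
        for i in range(n):
            if LEVELS[i] >= TARGET:
                continue
            nxt = min(i + 1, n - 1)
            # Bellman backup for the "send" action: pay SEND_COST, then
            # confidence improves one level with probability P_IMPROVE.
            V[i] = SEND_COST + P_IMPROVE * V[nxt] + (1 - P_IMPROVE) * V[i]
    return V
```

In this surrogate the fixed point has the closed form V[i] = SEND_COST / P_IMPROVE + V[i+1], so each confidence level below the target adds 1.25 expected units of communication cost; the real DEC-MDP policy generalizes this by conditioning the decision on the joint state of all agents.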
Though general communication can be naturally modeled with a DEC-MDP, techniques need to be developed to address the complexity issue before the system can be scaled up. I approach this problem from two perspectives. First, I proposed an algorithm to automatically generate a set of abstract communication actions such that, when the abstract information is transferred between the agents, the goal of the system is more likely to be reached. By allowing only abstract communication actions in certain states, both the expected communication cost and the time needed to solve the DEC-MDP are reduced.

Second, I established that the interaction among the agents is the cause of the high complexity of a DEC-MDP. This understanding is crucial to identifying new and more tractable models, as well as to developing appropriate approximations for otherwise intractable problems. I proved that deciding a distributed MDP whose interaction history can be encoded in size polynomial in the number of states is NP-complete, and that deciding a distributed MDP that is not polynomially encodable is strictly harder than NP. This is the first time a well-defined condition has been identified that distinguishes multi-agent problems in NP from those that are strictly harder than NP. It is an important step in mapping out the complexity hierarchy of multi-agent systems. The significance of this theoretical result also has a practical side: most multi-agent systems are provably harder than NP, and solving them optimally is infeasible. This work provides theoretical guidance in understanding how the approximations in a model limit the search space and reduce the complexity.
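To make the idea of an abstract communication action concrete, here is a deliberately simplified sketch: rather than transmitting every raw reading, an agent sends a single coarse summary, so one abstract action stands in for many concrete ones. The function name, binning scheme, and parameters are hypothetical illustrations, not the generation algorithm described above.

```python
# Hypothetical illustration of an abstract communication action: a coarse
# summary of many raw readings replaces sending each reading individually.
# The uniform binning scheme below is illustrative only.

def abstract_message(readings, bins=4, lo=0.0, hi=1.0):
    """Summarize raw sensor readings in [lo, hi] as one bin index:
    a single abstract action in place of many concrete 'send' actions."""
    mean = sum(readings) / len(readings)
    width = (hi - lo) / bins
    return min(bins - 1, int((mean - lo) / width))
```

Transmitting the bin index costs far less than transmitting the raw readings, and restricting certain states to such abstract actions is what shrinks both the communication cost and the DEC-MDP's policy space in the approach above.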