Abstract
Agent-based modeling is a paradigm for modeling dynamic systems of interacting agents, each governed by specified behavioral rules. From a demonstration perspective, it is easier to train such a model by specifying the desired emergent (swarm-level) behavior than by specifying individual agent behavior. While many approaches rely on manual behavior specification via code or on a predefined taxonomy of possible behaviors, the existing AMF framework generates mapping functions between agent-level and swarm-level parameters that are reusable once generated. This work builds on that framework by exploring sources of variance in performance, the composition of framework output, and the integration of demonstration using images. The demonstrator specifies the spatial motion of the agents over time and retrieves the agent-level parameters required to execute that motion. At its core, the framework uses computationally cheap image processing algorithms, making it suitable for time-critical applications.
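Although the abstract does not detail the internals of AMF+, a minimal sketch of the kind of computationally cheap, contour-based per-frame features it refers to might look as follows. This is an illustration only, using OpenCV; all function and variable names are hypothetical and not taken from the framework.

    # Hypothetical sketch: cheap per-frame features (contour area and shape)
    # from one demonstration image. Assumes OpenCV (cv2) and NumPy.
    import cv2
    import numpy as np

    def frame_features(frame_bgr):
        """Return a per-frame feature vector: largest-contour area plus Hu shape moments."""
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
        # Simple Otsu binarization; a real pipeline would tune or replace this step.
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return np.zeros(8)
        largest = max(contours, key=cv2.contourArea)
        area = cv2.contourArea(largest)
        hu = cv2.HuMoments(cv2.moments(largest)).flatten()  # 7 scale/rotation-robust shape descriptors
        return np.concatenate([[area], hu])

Features of this kind are cheap enough to compute per frame, which is consistent with the time-critical applications mentioned above.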
The proposed framework (AMF+) seeks to provide a general solution to the problem of allowing abstract demonstrations to be replicated by agents in a swarm. Solving this problem gives the framework potential use in applications such as games, education, surveillance, and search-and-rescue, where the swarm may be controlled remotely. Making this software available for academic research is therefore also a contribution to the scientific community. Abstracting the demonstration further removes technical requirements for the user: the framework can accept varied input methods, making it usable by a wide audience with differing demonstration preferences and capabilities.
The framework is analyzed in detail for its current and potential capabilities. Our work is tested with a combination of primitive visual feature extraction methods (contour area and shape) and features generated by a pre-trained deep neural network at different stages of image featurization. The framework is also evaluated for its potential when complex visual features are used at all featurization stages. Experimental results show significant coherence between the demonstrated behavior and the behavior predicted from the estimated agent-level parameters for a given spatial arrangement of agents. The framework is further evaluated by comparison with agent-based models and similar systems.
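For illustration only, the pre-trained deep neural network features mentioned above could be obtained with a standard backbone such as a torchvision ResNet-18 with its classification head removed; the actual network, preprocessing, and featurization stages used in this work are not specified here, and this sketch assumes a recent torchvision release.

    # Hypothetical sketch: deep features for one demonstration frame via a
    # pre-trained ResNet-18 with the classifier dropped. Assumes torch,
    # torchvision >= 0.13, and Pillow.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    weights = models.ResNet18_Weights.DEFAULT
    backbone = models.resnet18(weights=weights)
    backbone.fc = torch.nn.Identity()   # keep the 512-d penultimate features
    backbone.eval()

    preprocess = T.Compose([
        T.Resize(256),
        T.CenterCrop(224),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def deep_features(image_path):
        """Return a 512-dimensional feature vector for one demonstration frame."""
        img = Image.open(image_path).convert("RGB")
        with torch.no_grad():
            return backbone(preprocess(img).unsqueeze(0)).squeeze(0)  # shape: (512,)

Such learned features trade the low cost of the contour-based descriptors for richer visual information, which is the trade-off the different featurization stages are evaluated against.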