
Abstract

We develop a new theory of knowledge, supported by mathematics and a broad series of case studies, to better understand what constitutes knowledge in the field and its value for autonomous human–machine teams facing uncertainty in the open. Like their human teammates, artificial intelligence (AI) machines must be able to determine what constitutes the usable knowledge that contributes to a team’s success when facing uncertainty in the field (e.g., testing “knowledge” in the field with debate; identifying new knowledge; using knowledge to innovate) or to its failure (e.g., troubleshooting; identifying weaknesses; discovering vulnerabilities; exploitation using deception), and to feed the results back to users and society. Whether a debate is public, private, or unexpressed by an individual human or machine agent acting alone, we speculate in this exploration that only a transparent process advances the science of autonomous human–machine teams, assists in interpretable machine learning, and allows a free people and their machines to co-evolve. The complexity of the team is taken into consideration in our search for knowledge, which can also be used as an information metric. We conclude that the structure of “knowledge”, once found, is resistant to alternatives (i.e., it is ordered); that its functional utility is generalizable; and that its useful applications are multifaceted (akin to maximum entropy production). Our novel finding is the existence of Shannon holes, which are gaps in knowledge, a surprising “discovery” only to find that Shannon was there first.

Details

Title
Shannon Holes, Black Holes, and Knowledge: The Essential Tension for Autonomous Human–Machine Teams Facing Uncertainty
Author
Lawless, William 1; Moskowitz, Ira S. 2

1 Department of Mathematics and Psychology, Paine College, Augusta, GA 30901, USA
2 Naval Research Laboratory, Information Technology Division-5580, Washington, DC 20375, USA; ira.s.moskowitz.civ@us.navy.mil
First page
331
Publication year
2024
Publication date
2024
Publisher
MDPI AG
ISSN
2673-9585
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3110558140
Copyright
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.