Traditional cyber security techniques have led to an asymmetric disadvantage for defenders. The defender must detect all possible threats at all times, from all attackers, and defend all systems against all possible exploitation. In contrast, an attacker needs to find only a single path to the defender's critical information. In this article, we discuss how this asymmetry can be rebalanced using cyber deception to change the attacker's perception of the network environment and lead attackers to false beliefs about which systems contain critical information or are critical to a defender's computing infrastructure. We introduce game theory concepts and models to represent and reason over the use of cyber deception by the defender and the effect it has on attacker perception. Finally, we discuss techniques for combining artificial intelligence algorithms with game theory models to estimate hidden states of the attacker, using feedback through payoffs, to learn how best to defend the system using cyber deception. It is our opinion that adaptive cyber deception is a necessary component of future information systems and networks. The techniques we present can simultaneously decrease the risks and impacts suffered by defenders and dramatically increase the costs and risks of detection for attackers. Such techniques are likely to play a pivotal role in defending national and international security concerns.
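As a minimal illustration of the kind of game-theoretic model described above, the defender-attacker interaction can be sketched as a two-player game in which the defender chooses whether to deploy deception and the attacker chooses which system to target. The strategy names and payoff values below are entirely hypothetical, invented only to show the structure of such a model:

```python
# Illustrative two-player cyber deception game (all payoffs hypothetical).
# The defender chooses whether to disguise the critical asset with a decoy;
# the attacker, unable to observe that choice, picks a system to attack.
import itertools

# Payoff entries are (defender_utility, attacker_utility).
payoffs = {
    ("no_deception", "attack_A"): (-10, 10),  # attacker hits the real asset A
    ("no_deception", "attack_B"): (0, -1),    # attacker wastes effort on B
    ("deceive",      "attack_A"): (2, -5),    # A is now a decoy; attacker exposed
    ("deceive",      "attack_B"): (-8, 8),    # critical data moved to B, and hit
}

defender_strats = ["no_deception", "deceive"]
attacker_strats = ["attack_A", "attack_B"]

def best_response_attacker(d):
    """Attacker's best response if the defender's strategy d were known."""
    return max(attacker_strats, key=lambda a: payoffs[(d, a)][1])

def pure_nash_equilibria():
    """Enumerate pure-strategy Nash equilibria of the bimatrix game."""
    eqs = []
    for d, a in itertools.product(defender_strats, attacker_strats):
        d_ok = all(payoffs[(d, a)][0] >= payoffs[(d2, a)][0]
                   for d2 in defender_strats)
        a_ok = all(payoffs[(d, a)][1] >= payoffs[(d, a2)][1]
                   for a2 in attacker_strats)
        if d_ok and a_ok:
            eqs.append((d, a))
    return eqs

print(best_response_attacker("no_deception"))  # attack_A
print(pure_nash_equilibria())                  # []
```

Under these illustrative payoffs the game has no pure-strategy Nash equilibrium: for any fixed defender choice, the attacker's best response gives the defender an incentive to switch. This is one reason equilibrium play in deception games typically involves randomized (mixed) strategies, which the attacker cannot predict.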
There have been many recent advances in artificial intelligence and machine learning that have improved the speed and accuracy of detecting malicious activity so as to better defend networks. Many of these solutions can take predetermined actions against detected activities in order to bolster defense and prevent system compromise, such as dynamically reconfiguring a firewall rule to block an attempted denial-of-service attack. Today's solutions, however, generally stop short of either directly interfering with malicious activity or gracefully responding to more subtle indicators of malicious intent. The current risk is that false-positive rates can be quite high, and responding automatically to every alert would cause more harm than good. Often, it is still a human operator who weeds through the alerts generated by an ML-based detector, determines which are true positives and which are false, and then coordinates with a cyber defender to respond to the suspicious activity. The problem is further exacerbated by the limited set of...
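To make the false-positive concern concrete, a short Bayes'-rule sketch shows why even an accurate detector can flood operators with false alarms when malicious events are rare. The base rate, true-positive rate, and false-positive rate below are invented for illustration, not measured values:

```python
# Base-rate illustration (hypothetical numbers): the posterior probability
# that an alert is genuine, given the detector's accuracy and the rarity
# of actual attacks among monitored events.

def posterior_true_alert(base_rate, tpr, fpr):
    """P(malicious | alert) via Bayes' rule."""
    p_alert = tpr * base_rate + fpr * (1 - base_rate)
    return tpr * base_rate / p_alert

# Suppose 1 in 10,000 events is malicious, the detector catches 99% of
# attacks, and it raises false alarms on 1% of benign events.
p = posterior_true_alert(base_rate=1e-4, tpr=0.99, fpr=0.01)
print(f"{p:.3%}")  # fewer than 1 in 100 alerts is a true positive
```

With these assumed rates, well under one percent of alerts correspond to real attacks, which is why a human operator still ends up triaging the detector's output by hand.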