Cybersecurity is an incredibly serious business, so thinking of it as a game may seem a little odd. But Viliam Lisy, a Senior Research Scientist at Avast and an Assistant Professor at Czech Technical University in Prague (CTU), argues that a lot can be learned from how AI programs came to beat the best human players in the world at various games and challenges.
Lisy revealed his theory as part of the main CyberSec&AI Connected conference track with his talk “Playing Poker with Cybercriminals”.
Gaming the system
Lisy began his presentation by highlighting how AI has achieved “superhuman” performance in a wide range of games from chess to video games to poker. Indeed, Lisy himself has been involved in AI projects that have seen champion poker players beaten by AI programs.
According to Lisy, in most examples where machines have triumphed over humans, they learned to beat human players not by studying the way humans play, but by ignoring “explicit human knowledge”. Instead, they learned to win by playing thousands, even millions, of raw simulations (in other words, the AI learned by playing against itself).
Lisy showed how, in many critical ways, cybersecurity is like a “game”, and how framing it that way can help create solutions to successfully fight ever-changing cyberattacks.
“A game is a situation where the outcomes of actions critically depend on the actions performed by the other players,” said Lisy. “Cybersecurity is really a game, based on this definition.”
Lisy offered the example of a hacker trying to exploit a system: the attack’s success depends on how the system administrator has patched the network, so the situation fits this definition of a game.
Lisy, however, acknowledged the comparison was not perfect, noting for instance that AI excels when there is a ‘game model’ to help with sample complexity. AI performs best when it can experience each substantially different situation that can occur in a game many times. Furthermore, the “frequency of visiting” these situations matters a lot. This is why a computer can learn to play a game with set situations and outcomes — such as rock, paper, scissors — relatively quickly.
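To make the self-play idea concrete, here is a minimal sketch of an AI learning rock, paper, scissors purely from simulated play against itself. The talk does not specify an algorithm; this sketch assumes regret matching, a standard choice for this kind of game, and all names in it are illustrative.

```python
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    # +1 if action a beats b, -1 if it loses, 0 on a tie.
    if a == b:
        return 0
    return 1 if BEATS[a] == b else -1

def strategy(regrets):
    # Mix actions in proportion to positive regret; uniform if none.
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    if total == 0:
        return [1.0 / 3] * 3
    return [p / total for p in pos]

def train(iterations=100_000):
    regrets = [[0.0] * 3, [0.0] * 3]    # per-player cumulative regret
    strat_sum = [[0.0] * 3, [0.0] * 3]  # running sum for averaging
    for _ in range(iterations):
        strats = [strategy(regrets[0]), strategy(regrets[1])]
        picks = [random.choices(range(3), s)[0] for s in strats]
        for p in (0, 1):
            me, opp = picks[p], picks[1 - p]
            got = payoff(ACTIONS[me], ACTIONS[opp])
            for a in range(3):
                # Regret: what playing a would have earned, minus what we got.
                regrets[p][a] += payoff(ACTIONS[a], ACTIONS[opp]) - got
                strat_sum[p][a] += strats[p][a]
    total = sum(strat_sum[0])
    return [s / total for s in strat_sum[0]]  # player 0's average strategy

avg = train()
```

After enough simulated rounds, the average strategy approaches the game’s equilibrium — playing each action about a third of the time — without any human knowledge of the game being encoded beyond its rules.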
Lisy’s talk examined how this approach can help create effective AI models to fight something as complex as cybercrime, by building ‘game models’ from which AI can learn. He offered three key examples of model types that could be used:
- Hand-designed models: These are the simplest models to use. Examples include studying the number of failed attempts to crack a password and analysing the cost/reward ratio for an attacker and defender to help create an optimal strategy for fighting the cybercriminals.
- Models generated from existing knowledge bases: This technique studies “all” known ways to attack a system, drawing on sources such as vulnerability databases, intelligence feeds and MITRE ATT&CK to help shape and build a game model for AI to learn from.
- Data-driven modelling: Since explicit knowledge bases require a high number of people and resources to run and maintain, the last option is automated extraction of the game model from raw data, such as malware databases, where new threats and strategies of malware authors are continually logged.
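A hand-designed model of the first kind can be tiny. As an illustrative sketch — the scenario and payoff numbers below are invented for this article, not taken from the talk — here is a two-action attacker-defender game in which the defender picks the monitoring strategy that minimises the attacker’s best possible reward:

```python
# Hypothetical hand-designed game: the defender monitors one of two
# servers, the attacker attacks one. Entries are the attacker's reward
# (invented numbers, for illustration only).
#            attacker hits A   attacker hits B
PAYOFF = [[1.0,              5.0],   # defender monitors A
          [4.0,              2.0]]   # defender monitors B

def attacker_value(p_monitor_a):
    # The attacker best-responds to the defender's mixed strategy.
    val_a = p_monitor_a * PAYOFF[0][0] + (1 - p_monitor_a) * PAYOFF[1][0]
    val_b = p_monitor_a * PAYOFF[0][1] + (1 - p_monitor_a) * PAYOFF[1][1]
    return max(val_a, val_b)

def defender_minimax(steps=10_000):
    # Grid-search the monitoring probability for server A that
    # minimises the attacker's best-response reward.
    best_p, best_v = 0.0, float("inf")
    for i in range(steps + 1):
        p = i / steps
        v = attacker_value(p)
        if v < best_v:
            best_p, best_v = p, v
    return best_p, best_v

p, v = defender_minimax()
```

For these numbers the defender should monitor server A about a third of the time, capping the attacker’s expected reward at 3 — exactly the kind of optimal defensive strategy Lisy describes deriving from a cost/reward analysis.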
In the data-driven modelling Lisy favoured, AI can learn the “rules of the game” from the sheer volume of data available. In this model, human understanding is not necessary for optimizing our defense against cyberattacks.
The full presentation, as well as the subsequent live Q&A Lisy took part in, will be available later in the year on our YouTube channel. In the meantime, registered CyberSec&AI Connected delegates can find it in our Virtual Library.