Fifteen years ago, in the middle of a great enthusiasm for pattern classifier technology, with new methods proposed every year, Prof. David Hand raised the question of whether the apparent progress of new methods could be something of an illusion. I raise a similar question for machine learning in cybersecurity. Today, deep learning algorithms perform incredibly well on specialized tasks, sometimes exhibiting superhuman performance. But there is an elephant in the room, called “adversarial examples”, reminding us that machine learning models rest on the “independent and identically distributed” assumption, which supposes, roughly speaking, that future data will resemble past data. Therefore, when learning systems are deployed in the open world or in adversarial environments, they often misclassify (with high confidence) inputs that differ substantially from the known training data. In this talk, I briefly summarize this state of affairs and give my viewpoint on the real progress we are experiencing and the risk of illusions.
Avast Research Lab
The Avast Research Lab runs innovative projects in all areas of digital safety, from advanced threat detection to better privacy and identity protection, and much more. We employ state-of-the-art AI solutions to counter the ever-accelerating growth of emergent threats through a combination of in-house expertise, academic cooperation, and publicly available research.