Fifteen years ago, amid great enthusiasm for pattern classifier technology, with new methods being proposed every year, Prof. David Hand asked whether the apparent progress brought by new methods could be something of an illusion. In this talk, I raise a similar question for machine learning in cybersecurity. Today, deep learning algorithms perform incredibly well on specialized tasks, sometimes exhibiting superhuman performance. But there is an elephant in the room, called “adversarial examples”, reminding us that machine learning models rest on the “independent and identically distributed” assumption, which supposes, roughly speaking, that future data will resemble past data. Therefore, when learning systems are deployed in the open world or in adversarial environments, they often misclassify, with high confidence, inputs that differ substantially from their training data. I briefly summarize this state of affairs and give my viewpoint on the real progress we are experiencing and the risk of illusions.
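To make the adversarial-examples point concrete, here is a minimal toy sketch (my own illustration, not taken from the talk): a linear classifier that is confident on a clean input is flipped by a small, worst-case perturbation in the direction of the loss gradient, in the spirit of the fast gradient sign method. The classifier, weights, and epsilon below are all hypothetical.

```python
import numpy as np

# Toy linear classifier: predict sign(w . x).
w = np.array([2.0, -1.0])

def predict(x):
    return np.sign(w @ x)

# A clean point, correctly classified with label +1.
x = np.array([1.0, 1.0])
y = 1.0

# FGSM-style step: for the logistic loss, sign of the input gradient
# is -y * sign(w), so a small step that way maximally hurts the model.
eps = 0.6
x_adv = x - eps * y * np.sign(w)

print(predict(x))      # clean input: classified +1
print(predict(x_adv))  # perturbed input: prediction flips to -1
```

The perturbation is small in each coordinate, yet the prediction flips: the perturbed point no longer "resembles past data" in the sense the model implicitly assumed.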
Join us in November 2021 and register now for online CyberSec&AI Connected 2021