Getting Passive Aggressive About False Positives
False positives (FPs) have been a problem of critical importance for anti-virus (AV) systems for decades. As security vendors turn to machine learning (ML), the deluge of alerts has only grown. The primary reason is that vendors build one global model to satisfy all customers, with no mechanism to adapt it to individual local environments. Once deployed, the idiosyncrasies of each local environment expose blind spots that lead to FPs.
The industry has tried to combat these problems with inefficient allowlisting techniques and excessive model retraining. We propose using passive-aggressive learning to adapt a malware detection model to an individual environment, eliminating FPs without sharing any customer-sensitive information. With passive-aggressive updates we can eliminate a collection of notoriously difficult FPs from an environment without compromising the model's accuracy, reducing the total number of FPs by an average factor of 23.
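To make the idea concrete, here is a minimal sketch of a single passive-aggressive (PA-I) update on a linear classifier, the standard online-learning rule the talk's title alludes to. This is an illustrative toy, not Elastic's actual system: the weight vector, feature vector, and scenario below are hypothetical. The key property is that the update is "passive" on examples the model already classifies with margin, and makes the smallest "aggressive" correction (capped by `C`) on a misclassified local false positive.

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    """One PA-I update for a linear model sign(w . x), label y in {-1, +1}.

    If the hinge loss is zero, w is left unchanged (passive). Otherwise w is
    moved just far enough to reduce the loss, with step size capped by the
    aggressiveness parameter C.
    """
    loss = max(0.0, 1.0 - y * np.dot(w, x))
    if loss == 0.0:
        return w  # passive: already correct with sufficient margin
    tau = min(C, loss / np.dot(x, x))  # aggressive: minimal corrective step
    return w + tau * y * x

# Hypothetical scenario: a global linear malware score flags a benign local
# binary (feature vector x_fp) as malicious. Feeding it back with the benign
# label y = -1 nudges the boundary only in this region of feature space.
w = np.array([0.5, -0.2, 0.8])
x_fp = np.array([1.0, 0.0, 1.0])
print(np.sign(np.dot(w, x_fp)))   # +1: flagged as malicious (a false positive)
w = pa_update(w, x_fp, y=-1)
print(np.sign(np.dot(w, x_fp)))   # -1: now classified as benign
```

Because each update is local and bounded, a customer can fold in their own FP corrections without retraining the global model or sending their samples back to the vendor, which is the privacy property the abstract highlights.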
Bobby Filar is a Lead Security Data Scientist at Elastic, where he employs machine learning and natural language processing to drive cutting-edge detection and contextual-understanding capabilities in the Elastic Security platform. In the past year, he has focused on applying machine learning to process event data to provide confidence and explainability metrics for malware alerts. Previously, Bobby worked on a variety of machine learning problems in natural language understanding, geospatial analysis, and adversarial tasks in the information security domain.