
Workshop preview: “We are still waiting for the security revolution to arrive”

Feargus Pendlebury (King’s College London) talks machine learning, malware classification, and the benefits of virtual conferences

Feargus Pendlebury will lead one of the technical workshops at CyberSec&AI Connected this October 8th. Feargus is a PhD student in cybersecurity with the Systems Security Research Lab at King’s College London and the Information Security Group at Royal Holloway, University of London. His research explores the limitations of machine learning when applied to security settings. 

Feargus is a core author and maintainer of TRANSCEND, a framework for detecting concept drift using conformal evaluation. During his workshop at CyberSec&AI Connected, he will present an extension of TRANSCEND and show how it can be applied to many popular algorithms. He will also discuss two additional conformal evaluators that outperform the original while being far cheaper to compute.

Ahead of his workshop, we caught up with Feargus to get some insight into his presentation as well as discuss what inspires him in his research, trends in machine learning, and what he is most looking forward to at this year’s CyberSec&AI Connected.

Your workshop session is entitled ‘Revisiting Concept Drift Detection in Malware Classification’. Could you give us a little insight into what attendees can expect from your session? 

Machine learning has been a powerful tool for solving many tasks, but in security it feels like we’re still waiting for the revolution to arrive. One aspect that makes security different from other settings is that what we’re trying to detect, adversarial behaviour, is constantly fighting back, shifting and evolving to try to evade detection.

The evolution of malicious behaviour causes a phenomenon known as concept drift, which degrades the performance of our classifiers. One way to mitigate this is to identify and reject poor-quality classifications caused by drifting examples.

Here we revisit and extend TRANSCEND, a rejection strategy proposed a few years ago by our lab, which was built on a strong theoretical framework but was also extremely computationally intensive. I’ll walk through some of the improvements we’ve made and show how it’s now ready to be integrated into practical security pipelines, perfect for combating concept drift in the malware domain. 
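For readers curious about how such a rejection strategy works in practice, the sketch below illustrates the general conformal-evaluation idea that underpins TRANSCEND: score how “nonconforming” a new sample is relative to a calibration set, turn that into a p-value-style credibility, and reject predictions whose credibility falls below a threshold. This is a rough, hypothetical illustration rather than TRANSCEND’s actual implementation; the classifier, data shapes, and threshold are all made up for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data standing in for malware feature vectors (shapes are illustrative).
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 20)), rng.integers(0, 2, 500)
X_cal,   y_cal   = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)
X_test           = rng.normal(size=(10, 20))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def nonconformity(model, X, labels):
    # Higher score means the example "fits" its label less well.
    proba = model.predict_proba(X)
    return 1.0 - proba[np.arange(len(labels)), labels]

cal_scores = nonconformity(clf, X_cal, y_cal)

def credibility(model, x, cal_scores, cal_labels):
    """p-value of the predicted label: the fraction of same-class calibration
    examples that are at least as nonconforming as x."""
    pred = model.predict(x.reshape(1, -1))[0]
    score = nonconformity(model, x.reshape(1, -1), np.array([pred]))[0]
    same_class = cal_scores[cal_labels == pred]
    return pred, (np.sum(same_class >= score) + 1) / (len(same_class) + 1)

THRESHOLD = 0.1  # illustrative cut-off; in practice this would be calibrated per deployment
for x in X_test:
    label, cred = credibility(clf, x, cal_scores, y_cal)
    decision = "reject (possible drift)" if cred < THRESHOLD else f"accept label {label}"
    print(f"credibility={cred:.3f} -> {decision}")
```

In this framing, a low-credibility sample is one that looks unlike anything the classifier was calibrated on, so rather than trusting the prediction it can be set aside, for example for analyst review or retraining.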

Can you share with us what inspires you, or what is important to you in your research? 

People have a right to feel safe online and be free to use technology to better their lives. Unfortunately, the proliferation of profiteering and abuse that has accompanied the information age means that this isn’t always the case. Hopefully advances in research will eventually allow people to focus on what’s important to them without giving security and privacy a second thought! 

You were once part of the Abusive Accounts Detection team at Facebook, where you developed novel techniques for tracking adversarial behaviour on its social media platforms. What did you encounter, from a cybersecurity perspective, while working inside a social media platform?

The most startling thing is the scale at which such a social media platform operates. That also means that many of the security problems are unique and have never been faced by any company before, and novel problems call for innovative solutions. Luckily they have some incredible security teams who are definitely up to the challenge.

This year’s event will examine critical issues around AI for privacy and security. What aspects of this theme are you most looking forward to discussing with other speakers and panelists at the conference? 

I’m interested in discussing how we can ensure that such technology is used for good when we often see it being misused or used to inflict harm. While my research to date is almost entirely technical, it’s important to recognise that some issues won’t have technical solutions; they’ll require regulation, cooperation, or social action to solve.

What are some of the latest trends and developments in your field? What hopes or worries do you have for the future based on them? 

It’s difficult to say as things move so quickly and are often unpredictable! I think one exciting area where we’re making progress is explainability, which makes machine learning a much more valuable tool for security. As for my worries, well, again it’s hard to predict, but I’ll try to stay optimistic!

Are there any speakers or sessions in particular at CyberSec&AI Connected you are looking forward to watching (see the full list here: https://cybersecai.com/speakers/)? 

I’m a big fan of the work that Carmela Troncoso’s lab has produced, as well as Hyrum Anderson’s group, so I’m really looking forward to seeing their sessions. I actually first met Luca Demetrio at the inaugural CyberSec&AI conference, where he presented a great poster, so I’m sure he’ll deliver a fantastic talk this year as well. Additionally, having seen Hany Farid’s invited talk at USENIX last year, I expect his session will be really insightful, especially given how pertinent the topic is to the world today.

Due to current world events, this year’s conference will be done a bit differently. CyberSec&AI will be going virtual, connecting attendees wherever they are in the world. What excites you about this format and the opportunities it brings?

While it will be sad not to see everyone in person, I’ve found virtual events tend to be a bit more informal, which can help create a more open and creative environment that’s great for brainstorming new ideas.

Do you see the way people collaborate and partner together changing in a post-Covid world?

I think the work will be largely the same as before. Perhaps a reduction in travel will benefit people with busier lives (and the environment), but overall it will be a lot less fun if you can’t grab pizza with your collaborators from time to time…

To watch Feargus Pendlebury’s presentation live, or to view it afterwards via our Virtual Library, secure your place at CyberSec&AI Connected. Visit our booking page for Cybersecurity Month’s special buy-one-get-one offer: secure your virtual seat and get another for free.


