Avast Team, Aug 27, 2020

Celeste Fralick is the Senior Principal Engineer and Chief Data Scientist for McAfee. She is responsible for McAfee’s technical analytic strategy that integrates into McAfee consumer and enterprise products as well as internal Business Intelligence. Previously, Celeste was Chief Data Scientist in Intel’s Internet of Things Group where she developed Machine Learning and Deep Learning analytics for over eight different markets.

Active in various industry and academic boards, journal editorial staffs, and consortia, her experience spans everything from analytics and systems to product development. Celeste brings 40 years of industry experience to CyberSec&AI Connected, where she will join the panel ‘Tackling Bias in AI’ with Samir Kumar (M12) and Rajarshi Gupta (Avast). 

We caught up with her to discuss a range of topics around AI, including how big data has helped fight everything from cancer to COVID-19 and how race relations could be improved through better teaching of bias implications in schools. 

We are very much looking forward to welcoming you to CyberSec&AI Connected this October. Could you start by giving us an overview of the work you do as Senior Principal Engineer and Chief Data Scientist at McAfee? 

I lead a team of advanced analytic researchers who explore state-of-the-art technology in AI. If the research is successful, we translate the approach to production. I also enable all of McAfee to apply best-known methods in analytics, ensuring that we are repeatable and reproducible in all the analytics we perform and embed, no matter the organization.

What areas in particular are exciting you at the moment in AI and cybersecurity? 

I am always excited about cortical algorithms (“neural networks on steroids”) as well as deep fake detection capabilities, particularly in a US election year. From an operational standpoint, I am very focused on AI reliability – ensuring that the model performs as expected over time. AI reliability should be a critical part of “MLOps” or “AIOps” throughout the entirety of the data pipeline. With each of these, security algorithms can excel in our products.

What topics and issues do you most look forward to discussing on the CyberSec&AI Connected panel “Tackling Bias in AI”? 

Besides ethics, I believe we should be focused on critical in-line and field monitors of bias in the data pipeline, including feedback for continual improvement. Bias can be embedded in the analytics, the sampling, and the measurement, and, of course, it can be prejudicial. We should be able to answer, “What do I need to monitor in my data pipeline to minimize bias?”
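To make the idea of an in-line bias monitor concrete, here is a minimal sketch (not from the interview; all names and the 0.2 threshold are illustrative assumptions) of a check that could run on each batch flowing through a pipeline, comparing positive-prediction rates across groups:

```python
# Hypothetical in-line bias monitor for a data/ML pipeline.
# Function names and the alert threshold are illustrative, not McAfee's.

def demographic_parity_gap(predictions, groups):
    """Largest absolute difference in positive-prediction rate between groups."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in idx) / len(idx)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

def bias_alert(predictions, groups, threshold=0.2):
    """Flag a batch whose parity gap exceeds the monitoring threshold."""
    return demographic_parity_gap(predictions, groups) > threshold
```

For example, a batch where group "a" receives positive predictions 50% of the time and group "b" 100% of the time has a gap of 0.5 and would trip the alert; feeding such alerts back into retraining is one form of the "feedback for continual improvement" mentioned above.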


Is it difficult for those in academia and industry to keep pace with developments in AI, given the enormous potential of the field? 

Oh yes! I also share responsibility for our analytics marketing messages (e.g., media, blogs), and it is difficult to be extremely detailed in the research and yet extremely broad and high-level in the marketing message. We can’t assume the general population has a PhD in analytics.

What are the most dramatic consequences (both positive and negative) you see for society when it comes to AI, security, and privacy?

Inadvertent consequences – we don’t know what we don’t know – can be negative, particularly for diverse populations, if bias isn’t taken into account (e.g., in loan approvals or incarceration time). AI has also brought negative consequences to security: adversaries can now use AI as much as we use it to detect, protect, and correct against attacks – fortunately, we’re keeping one step ahead, even with COVID phishing. Privacy challenges abound with evolving government regulations, and it is really up to each individual to take charge and ensure we share only what is necessary. On a positive note, AI has made sense of Big Data – and the world is our oyster. Detection of small cancers, voice assistants, 24/7 connectivity, and even the swiftness in identifying the RNA and structure of COVID-19 – while AI comes with a price we need to be aware of, the positive impact definitely outweighs the negative, in my humble opinion!

The growing use of artificial intelligence in areas such as cybersecurity, social media, advertising, hiring, criminal justice, and healthcare has stirred a debate about bias and fairness. Do you think AI decisions will be less biased than human ones? 

AI decisions will be as good and as unbiased as those who create AI and its learning functions. Monitoring bias drift is critical in a deployed solution. We need to ensure that bias is a focus in data science curricula. I would like to think that we could improve race relations if bias implications were taught as early as the K-12 levels in school.

What are some of the recent trends and developments around AI, ML, and privacy that have caught your eye? 

Virtual Reality (VR) may finally get its due. With a 42% compound annual growth rate (CAGR), VR can have a significant impact on medical analysis as well as professional training. Augmented Reality (AR), projected to reach a $177B market value by 2022, was demonstrated early with Pokémon Go. Now the race is on for heads-up displays and for social, immersive experiences (particularly with data). With both VR and AR, consumers will need to pay close attention to what’s happening with their data – who’s collecting it, why, how, where, and when. Companies’ compliance and privacy officers have a lot to keep up with to avoid potential fines. 

Due to current world events, this year’s conference will be done a bit differently. CyberSec&AI will be going virtual, connecting attendees wherever they are in the world. What excites you about this format and the opportunities it brings?

At first, back in March, I was skeptical about virtual conferences. After presenting at and attending a few, I see that the format is much more informal and casual, allowing attendees to easily ask questions and engage with each other as well as with the speaker. I suspect that many conferences will permanently switch to this format, giving an option to many who can’t travel – especially students! Even a hybrid format will be welcome once COVID (hopefully) subsides.

View the full agenda for CyberSec&AI Connected here. To join the panel discussion live, secure your place on our booking page.

