Avast Team, Aug 13, 2020

“The public, our governments, and the titans of the technology sector do not seem to be equipped to contend with the global disinformation pandemic,” says Hany Farid, a professor at the University of California, Berkeley. Farid holds a joint appointment in Electrical Engineering & Computer Sciences and the School of Information. 

Farid’s research focuses on digital forensics, image analysis, and human perception. We caught up with him to preview his talk at this year’s CyberSec&AI Connected and discuss the impact deep fakes are having on the world we live in. 

Could you give us a little insight into your talk “Creating, Weaponizing, and Detecting Deep Fakes”? What kind of things can delegates expect to learn or discover? 

The past few years have seen a startling and troubling rise in the fake-news phenomenon, in which everyone from individuals to nation-sponsored entities can produce and distribute misinformation. The implications of fake news range from a misinformed public to horrific violence and an existential threat to democracy. At the same time, recent and rapid advances in machine learning are making it easier than ever to create sophisticated and compelling fake images, videos, and audio recordings, making the fake-news phenomenon even more powerful and dangerous. I will provide an overview of the creation of these so-called deep fakes, and I will describe emerging techniques for detecting them. 

How fast moving are the technology and science behind deep fakes? Is it difficult for those in academia and industry to keep pace with the field's developments and potential? 

Deep fakes splashed onto the scene a few years ago. Since then, the technology to create them has been developing rapidly, as has the sophistication with which audio, images, and videos can be manipulated. We are struggling to keep up with the pace at which this technology is developing, as well as to understand and mitigate the potential risks.

Can you give us any real-world examples of opportunities, challenges, or problems that deep fakes have already caused?

There are many potential threats of deep fakes, from disinformation to fraud and election interference. Currently, the most common misuse of this technology is the creation of nonconsensual pornography, which disproportionately impacts women. The other most significant threat is the rise of the so-called liar’s dividend, in which anyone can claim that any image or video is fake, casting doubt on any inconvenient facts.

How knowledgeable are the general public, governments, and businesses about deep fakes? How can more be done to increase awareness and understanding of them? 

The media has done a fairly good job of covering deep fakes and their threats. Our legislators, however, are still struggling with how, or whether, to regulate this new form of digital trickery. Social media platforms are likewise struggling with how to handle the use of deep fakes in disinformation campaigns.

Are you optimistic about how we can combat deep fakes as the tech becomes ever more sophisticated?

I am not optimistic about the broader problem of online disinformation, of which deep fakes are one part. The past few years have seen an explosion of disinformation online, with impacts ranging from small- and large-scale fraud to sown civil unrest, disrupted democratic elections, and destabilized societies. The public, our governments, and the titans of the technology sector do not seem to be equipped to contend with the global disinformation pandemic.

As you know, this year’s event will examine critical issues around AI for privacy and security. What aspects of this theme are you looking forward to discussing with other speakers and panelists at the conference?

I think that it is important that we talk about how technology is and can be misused and weaponized before it spins out of our control. For too long, we have developed and deployed technology without always thinking about the consequences. 

What are some of the recent trends and developments around AI and privacy that have caught your eye?

If I’ve learned anything over the past few decades, it is to not try to predict the future of technological innovations. 

If you want to join Hany Farid at CyberSec&AI Connected, visit our booking page to secure your place and take advantage of our Summer Rate or 3-for-2 access offer.

This article features

Hany Farid


EECS & I School, UC Berkeley
