Avast Team, Jun 25, 2020

Garry Kasparov is widely acclaimed as one of the greatest chess players of all time. In 1985, aged just 22, he became the youngest chess world champion in history. In 1996 and 1997, his matches against Deep Blue, the IBM supercomputer, brought the potential of AI to the world’s attention.

In recent years, Garry has become a hugely respected commentator and author on AI, human rights, cybersecurity, and politics, making him an ideal speaker for CyberSec&AI Connected, with its core themes of AI for privacy and security. We sat down with Garry, who is a Security Ambassador for Avast, to talk about his appearance at the conference as well as his views on AI, privacy, and security.

What are you looking forward to discussing in your fireside chat at CyberSec&AI Connected?

I’m always most interested in where cybersecurity and AI intersect with trends in society: privacy and rights, the roles public agencies and private companies play, and, of course, where the individual stands in the middle of it all. And it would be wrong to completely avoid discussing the unique moment we’re all living through today. I know we all need a mental escape from the crisis, but the pandemic is also shaping our professional world, our technological world. It is affecting how we deal with information and accelerating what we need and expect from our intelligent machines.

HEAR GARRY AT CYBERSEC&AI CONNECTED

Your battles with Deep Blue did more to bring AI into the public consciousness than any other event. How much did Deep Blue help people grasp the potential of AI and how much did it perhaps lead to people misunderstanding what AI is?

It did a lot of both, in hindsight. Every watershed moment in technology creates a lot of hype and misconceptions, and then, over time, it becomes mythology. As the saying goes, we call something AI until we find a way to make it work, then we call it something else. This is why arguing about whether Deep Blue was AI or not is irrelevant when talking about public perception and results. Deep Blue won, that’s what mattered. It played world-champion-level chess, which was its only purpose. It was a shock, not just to me, that suddenly forced the world to confront a future with machines competing for intellectual work, not just physical labor and routine tasks.

Computer experts and philosophers can debate method and output endlessly. But as an advocate and observer, not an expert or philosopher, what matters to me in the end is what it can do, not how it does it—even when that is fascinating! That is, how our intelligent machines help us, how they advance our understanding of the world, make us more productive, safer, healthier, everything our tech has always done.

You have been vocal about the advantages AI can offer society. Why do you think there is so much fear and confusion around AI? 

Partly because we fear anything powerful and new, any changes we cannot predict. Nuclear power, the internet, AI, they are too big to comprehend, and so that leads to some instinctive fear. Next, our society has become wealthy and risk-averse, so anything that looks like it might upset the status quo is assumed to be bad. Will it replace too many jobs? Will it replace ME? And the news always loves a scary headline. It’s a shame, because AI is a huge opportunity for growth in practically every dimension, but if we don’t press forward ambitiously, the negative aspects, like job losses, will only mount, without the benefits growing fast enough.

Lastly, there’s the impact of decades of sci-fi about killer robots, super-intelligent machine dystopias, etc. This is part of a cultural trend toward tech-phobia that coincided with the environmental movement in the 1970s, although the anti-nuclear sentiment had a role as well. Instead of being amazed by incredible new tech, like robots and AI, we immediately turn to wondering how they might harm us, which is ridiculous.

You’ve said previously that progress in areas like AI cannot be stopped and that if you restrict growth in AI in Europe and America, another region will simply move ahead. However, you’ve also acknowledged that companies that generate vast amounts of private data, such as Facebook and Google, do need more public control. How can that be brought about?

I don’t like the word “control,” because transparency and pressure from the public tend to lead to regulations, not takeovers. These giant companies have unprecedented access to the lives, the data, of billions of people. Relying on profit motive, shareholder interest, and media investigations alone to navigate this would be irresponsible. I’m no fan of government interference or heavy regulations that might stifle innovation, but there is massive public interest here, so public oversight is required.

This isn’t a matter of uninformed lawmakers telling Silicon Valley how to do things. The pressure should be toward transparency and accountability, and toward the bigger issues of the right to privacy and data control.

Will AI strengthen or erode human rights? And how would you like to see AI used to improve or secure them?

I always say that tech is agnostic, it’s a tool. Was the invention of the hammer good or bad for human rights? Tools are good for human advancement, and that, eventually, is good for human rights. But the present is always what’s on everyone’s mind. Regarding human rights, people tend to think of the potential for AI to help dictators surveil their subjects, of privacy nightmares, or of how algorithms can build prejudice into systems. They want to protect human rights from AI. That’s the bad news.

But AI can also help us root out inequality and discrimination, when designed and directed well. AI can help track refugee populations, analyze bomb patterns to attribute war crimes, and keep an eye on the bad guys just as they try to keep an eye on everyone else. Again, it’s a tool, and there’s no way to make a tool so that it only does good in the world, as nice as that sounds.

How can we educate people, especially the younger generation, about AI? After all, AI is going to reshape their world and the work they do in it. Is there enough public debate on the topic?

The young people are the ones who are going to reshape the world with AI, because they’ll grow up with it, not see it as something alien or a science project. It’s like how Millennials grew up with the internet and “speak” it as a native language, unlike the previous generation, which still talks about it as a monolithic thing. It’s older people who need a crash course!

That said, we could use better education of the media about what AI is and what it isn’t, and a more sober, practical discussion without all the fear-mongering and sci-fi. People made it, people use it, and people are accountable for it. I do think we need more public discussion about how AI is being used. Autonomous weapons? Algorithms that decide who gets a job or a loan? Again, always transparency.

The COVID-19 pandemic has accelerated the need for AI and data-based solutions in the health sector. Do you have any thoughts on how governments, tech companies, and medical organizations are using AI to combat the virus? How concerned are you that some regimes or organizations might be taking advantage of the situation for their own ends?

While the creation of new medicines, including vaccines, can be made more efficient by faster and smarter computing power, AI will really shine in tracking their effects. Its ability to rapidly crunch data, to find patterns and correlations, means we can test faster and more safely and be more confident of the results.

Ironically, so many of our habits are changing in response to the pandemic that it’s confusing the very algorithms we need to help us fight it! We design models based on our behavior, and AI algorithms are trained on the data that behavior generates. Suddenly, much of that behavior changed, making a lot of that data obsolete, and with it, the algorithms. So trying to use the usual analytics can produce incorrect results.
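(Editor’s note: the effect Garry describes is what machine-learning practitioners call concept drift: a model fitted to yesterday’s behavior quietly degrades once that behavior shifts. The minimal Python sketch below uses entirely made-up synthetic data, not anything from a real system, just to illustrate the mechanism.)

```python
# A minimal, illustrative sketch of concept drift. The "travel" feature,
# the labels, and the threshold are all synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def behavior(n, mean):
    """Simulate one behavioral feature (say, daily travel) and a label
    that depends on it (say, shopping in-store vs. online)."""
    travel = rng.normal(mean, 1.0, size=(n, 1))
    labels = (travel[:, 0] + rng.normal(0.0, 0.5, n) > mean).astype(int)
    return travel, labels

# Pre-pandemic behavior: the model learns a decision boundary near 5.
X_old, y_old = behavior(2000, mean=5.0)
model = LogisticRegression().fit(X_old, y_old)

# Behavior shifts (e.g. lockdowns): the old boundary no longer applies,
# so the model misclassifies roughly half of the new data.
X_new, y_new = behavior(2000, mean=1.0)
print("accuracy on old behavior:", model.score(X_old, y_old))  # high
print("accuracy on new behavior:", model.score(X_new, y_new))  # near chance
```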

A crisis brings out the best in humanity and also the worst. You might say it reveals true character. So of course criminals and criminal regimes will exploit the moment. And not just bad guys, but companies that want to expand their data collection, or agencies that want to expand their reach.

Your fellow keynote speaker at CyberSec&AI Connected will be Tor co-founder Roger Dingledine. What would you like to ask him?

I’d like to ask about how he sees Tor and other privacy technology fitting into the problem of accountability online. In human rights advocacy, anonymity can be a life-saving necessity. There’s a reason dictatorships don’t want anyone to ever be anonymous, online or off. But anonymity also contributes to everything from cybercrime to online harassment and other types of abuse. Balancing responsibility, holding people accountable, and providing necessary protections is a difficult task. Is total anonymity, like unbreakable encryption, simply both a boon and a curse, forever? Or can we have it all?

SECURE YOUR PLACE AT CYBERSEC&AI CONNECTED TO HEAR GARRY SPEAK.

This article features

Garry Kasparov

Chess Grandmaster, Avast Security Ambassador
