Session

Panel: Tackling Bias in AI

Artificial Intelligence has seen immense advancements in the 21st century, and its impact has been described as "the next electricity". Yet even the most sophisticated AI algorithms are only as good as their underlying training data, and flaws in that data lead to bias in algorithmic decisions. In this panel, our experts will discuss how deep and challenging the biases in AI algorithms actually are. Can we trust the algorithms deployed today (in medicine, finance, security, legal, and HR) to accurately model and protect the interests of everyone? And if not, what techniques can be used to quantify and then eliminate those biases?

Biography

Professor Sandra Wachter is an Associate Professor and Senior Research Fellow in Law and Ethics of AI, Big Data, and Robotics, as well as Internet Regulation, at the Oxford Internet Institute at the University of Oxford. Wachter specialises in technology, IP, data protection, and non-discrimination law, as well as European, international, (online) human rights, and medical law. Her current research focuses on the legal and ethical implications of AI, Big Data, and robotics, including profiling, inferential analytics, explainable AI, algorithmic bias, diversity and fairness, governmental surveillance, predictive policing, and human rights online.
