In our monthly roundup of news relating to AI, privacy, cybersecurity, and machine learning, we look at privacy concerns for students studying remotely, new research into machine learning leaks, and the increased use of facial recognition software by governments and law enforcement.
Where to find machine learning leaks
Two presenters from the 2019 edition of CyberSec&AI Connected have been in the news recently for their research on machine learning. Stratosphere Lab published an article by Maria Rigaki and Sebastian Garcia of the Czech Technical University in Prague (co-organizers of CyberSec&AI Connected).
The piece reviews the most recent techniques that cause machine learning models to leak private data, examines the most important attacks in this field, and explains why these attacks are possible. In the blog, Rigaki and Garcia draw on their comprehensive survey of privacy attacks in machine learning and include a GitHub repository of the papers in this area along with the accompanying code.
You can read the article here.
Why universities need to be careful around privacy with exam software
MIT Technology Review has published a piece examining the dangers posed by exam proctoring tools. With the COVID-19 pandemic forcing universities and other learning institutions to expand their remote learning and exam options for students, there has been an accompanying boost in demand for software designed to prevent cheating.
However, according to Shea Swauger from the University of Colorado Denver, there are growing concerns that “algorithmic proctoring is a modern surveillance technology that reinforces white supremacy, sexism, ableism, and transphobia. The use of these tools is an invasion of students’ privacy and, often, a civil rights violation”.
Swauger also expresses grave concerns over the use of machine learning, biometrics, AI, and even facial recognition and detection tools, and the ways these tools may discriminate against students. You can read the full piece here.
Controversial facial recognition firm signs major US government contract
Clearview AI, the US technology company that offers facial recognition software for use by law enforcement agencies, private companies, and educational institutions, has signed a contract with the US Department of Homeland Security (DHS). The arrangement will give Immigration and Customs Enforcement (ICE) the ability to access Clearview AI’s technology. ZDNet notes that “Combining facial recognition searches with ICE, a DHS department already surrounded by controversy due to its detention centers, practices concerning child containment, and now 17 detainee deaths this year, could be an explosive combination”.
The ethics of facial recognition software is an increasingly hot topic of debate. The article also notes that firms such as Amazon, IBM, and Microsoft have announced they will stop selling facial recognition tools to law enforcement agencies, citing ethical and privacy concerns. Read more here.
Be part of the debate around AI, privacy, and cybersecurity
If you want to see some of the latest research and thinking around AI, privacy, machine learning, and cybersecurity, make sure to join us virtually at CyberSec&AI Connected. Taking place on October 8th, 2020, the conference brings together leading academics and tech professionals from around the world to examine critical issues around AI for privacy and security.