Kiel.AI Meetup on Security in Machine Learning: Undesired Behavior and Weaknesses

Professor Esfandiar Mohammadi from the University of Lübeck will discuss security issues in machine learning that you might not even be aware of!

On June 16, at 6 pm, we are very happy to welcome Professor Esfandiar Mohammadi from the University of Lübeck for our next Kiel.AI Meetup with a talk on security in the field of machine learning.

Machine learning techniques are versatile and applicable to many scenarios. Yet their flexibility can lead to undesired side effects and vulnerabilities. Esfandiar Mohammadi will discuss how a model can retain undesired footprints of potentially sensitive training data, how various federated learning approaches are even more vulnerable to leaking information about the training data, and how malicious participants in federated learning settings can inject backdoors into a jointly trained model.
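To give a flavour of the backdoor problem the talk will touch on, here is a minimal, purely illustrative sketch (not the speaker's construction): a toy federated averaging setup in which one of several simulated participants trains on label-flipped data stamped with a fixed trigger feature. All model choices, parameters, and helper names below are hypothetical.

```python
# Purely illustrative sketch of a data-poisoning backdoor in federated
# averaging (not the speaker's construction). One of five simulated clients
# trains on label-flipped inputs stamped with a fixed "trigger" feature;
# all names, model choices, and parameters here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 5, 6


def train_logreg(X, y, w, lr=0.1, epochs=200):
    """Plain batch gradient descent for logistic regression (no bias term)."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w


def make_clean(n=200):
    """Honest clients: the label depends only on the first feature."""
    X = rng.normal(size=(n, dim))
    y = (X[:, 0] > 0).astype(float)
    return X, y


def make_poisoned(n=200):
    """Malicious client: stamp a trigger on the last feature and flip labels."""
    X, y = make_clean(n)
    X[:, -1] = 3.0   # attacker-chosen trigger pattern
    y[:] = 1.0       # attacker-chosen target label
    return X, y


global_w = np.zeros(dim)
for _ in range(10):                       # communication rounds
    local_models = []
    for c in range(n_clients):
        X, y = make_poisoned() if c == 0 else make_clean()
        local_models.append(train_logreg(X, y, global_w.copy()))
    global_w = np.mean(local_models, axis=0)   # federated averaging

# Compare clean test accuracy with how often trigger-stamped inputs
# are pushed to the attacker's target label.
X_test, y_test = make_clean(1000)
X_trig = X_test.copy()
X_trig[:, -1] = 3.0
predict = lambda X: (1.0 / (1.0 + np.exp(-X @ global_w)) > 0.5).astype(float)
print("clean accuracy:    ", (predict(X_test) == y_test).mean())
print("trigger -> label 1:", (predict(X_trig) == 1.0).mean())
```

The point of the sketch is only that the aggregation step averages in whatever the malicious participant learned; how strongly such a backdoor survives, and how it can be detected or mitigated, is exactly the kind of question the talk will address.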

About our guest speaker:
Esfandiar Mohammadi has been a tenured professor at the University of Lübeck since 2019, with a research focus on IT security & data privacy, in particular privacy-preserving machine learning and anonymous communication. From 2016 to 2019, he was a postdoctoral researcher at ETH Zürich, after finishing his doctorate at Saarland University in 2015.

To join us for this meetup, simply register via Meetup HERE.

We are looking forward to a lively discussion of the important issues raised in the talk!