As the popularity of Machine Learning models continues to soar, concerns about the risks associated with black-box models have become more prominent. While much attention has been given to unfair models that may discriminate against minority groups, another concern is often overlooked: the privacy risks posed by ML models themselves.
Research has shown that ML models are susceptible to a variety of attacks. A notable example is the Membership Inference attack, which lets an adversary predict whether a specific sample was part of the model's training set.
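To give a flavor of the idea, here is a minimal, purely illustrative sketch (not from the talk itself): an extremely overfit toy "model" that memorizes its training set, and an attacker who flags a sample as a member whenever its loss is suspiciously low. All names and thresholds are hypothetical.

```python
import math

# Hypothetical toy dataset of (feature, label) pairs the model was "trained" on.
train = {(0.1, 1), (0.2, 0), (0.3, 1), (0.4, 0)}

# Extreme overfitting: the model memorizes training points and is
# uninformative (p = 0.5) on anything unseen. Membership inference
# exploits exactly this gap between seen and unseen samples.
memory = {x: y for x, y in train}

def predict(x):
    """Return the model's predicted probability of label 1."""
    if x in memory:
        return 0.99 if memory[x] == 1 else 0.01
    return 0.5

def loss(x, y):
    """Cross-entropy loss of the model on a single sample."""
    p = predict(x)
    return -math.log(p if y == 1 else 1 - p)

def is_member(x, y, threshold=0.1):
    """Attacker's guess: low loss suggests the sample was in the training set."""
    return loss(x, y) < threshold

assert all(is_member(x, y) for x, y in train)  # training samples are flagged
assert not is_member(0.5, 1)                   # an unseen sample is not
```

Real attacks replace the memorizing toy with an actual trained network and calibrate the threshold with shadow models, but the loss-gap intuition is the same.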
Join me in this talk, where I will explain the privacy risks inherent in Machine Learning models. Beyond exploring potential attacks, I will show how techniques such as Differential Privacy and tools like Opacus (https://github.com/pytorch/opacus) can play a crucial role in training more robust and privacy-preserving models.