Information Leakage in ML Deployments: How, When, and Why?

Wednesday, March 16
Speaker(s): Varun Chandrasekaran
Machine learning (ML) is widely used today, ranging from applications in medicine to those in autonomous driving. Across all these applications, various forms of sensitive information are shared with the ML model, such as private medical records or a user's location. In this talk, I will explain what forms of private information can be learned through interacting with the ML model. In particular, I will discuss when ML model parameters in cloud deployments are not confidential, and how this can be remediated. Next, I will discuss how model parameters learn private user information, how this can be prevented, and when such prevention mechanisms fail. Finally, I will reason about why certain ML models are more vulnerable to privacy leakage.

Speaker Varun Chandrasekaran is a doctoral candidate at the University of Wisconsin-Madison, where he works with Suman Banerjee and Somesh Jha. His research interests lie at the intersection of security, privacy, systems, and machine learning. His work aims to understand the security and privacy vulnerabilities of real-world ML deployments and to design practical interventions, informed by theoretical insight into privacy violations in ML models.
Sponsor

Computer Science

Co-Sponsor(s)

Electrical and Computer Engineering (ECE); Mathematics; Pratt School of Engineering; Statistical Science

Contact

Tatiana Phillips