Security of AI Systems

The recent success of artificial intelligence (AI) in domains such as computer vision, natural language processing, and medical diagnosis has demonstrated its power. To simplify AI model training and deployment, many cloud providers now offer Machine-Learning-as-a-Service (MLaaS). However, protecting the security and privacy of cloud-based AI systems remains challenging: model owners and end-users have limited control over their models and data in the cloud, and, given the prevalence of cyberattacks today, the cloud provider itself may be untrusted or even compromised. In this project, we investigate security and privacy threats to cloud-based and IoT-based AI systems, along with defenses against them. First, we demonstrate a model inversion attack on IoT-cloud collaborative inference systems, in which an attacker on the cloud side can recover the user's input with high fidelity. Second, we propose a practical defense that mitigates the model inversion attack while balancing the privacy and usability of the collaborative inference system. Third, we propose sensitive-sample fingerprinting, an efficient and effective method that enables end-users to check the integrity of their models in the cloud on their own, without help from the cloud provider. We hope this line of research sheds light on protecting the security of AI systems.
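To make the collaborative inference setting concrete, below is a minimal PyTorch sketch of an IoT-cloud split model. The architecture, layer sizes, and class names (EdgePart, CloudPart) are illustrative assumptions, not the exact models used in the publications below; the point is only that the raw input stays on the device while an intermediate feature map is sent to the cloud.

    # Minimal sketch of IoT-cloud collaborative inference (PyTorch).
    # The edge device runs the shallow layers and ships the intermediate
    # feature map to the cloud, which runs the remaining layers.
    import torch
    import torch.nn as nn

    class EdgePart(nn.Module):
        """Shallow layers executed on the IoT/edge device."""
        def __init__(self):
            super().__init__()
            self.layers = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

        def forward(self, x):
            return self.layers(x)

    class CloudPart(nn.Module):
        """Deep layers executed on the cloud."""
        def __init__(self):
            super().__init__()
            self.layers = nn.Sequential(
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(), nn.Linear(32 * 7 * 7, 10))

        def forward(self, z):
            return self.layers(z)

    edge, cloud = EdgePart(), CloudPart()
    x = torch.randn(1, 1, 28, 28)   # user's private input (MNIST-sized here)
    z = edge(x)                     # only these features leave the device...
    logits = cloud(z)               # ...and they are all the cloud should see.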


Attacks and Defenses in IoT-Cloud Systems

This project explores model inversion attacks that leak users' inference data in IoT-cloud systems and develops defenses against them; a simplified sketch of the attack appears after the publication list.
  1. Zecheng He, Tianwei Zhang, and Ruby B. Lee, "Model Inversion Attack against Collaborative Inference", Annual Computer Security Applications Conference (ACSAC), 2019.
  2. Zecheng He, Tianwei Zhang, and Ruby B. Lee, "Attacking and Protecting Data Privacy in Edge-Cloud Collaborative Inference Systems", IEEE Internet of Things Journal (IoTJ), 2019.
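
Below is a simplified sketch of the white-box inversion idea studied in the ACSAC paper above: an attacker who intercepts the intermediate features and knows the edge-side model reconstructs the input by gradient descent on a feature-matching loss with a total-variation prior. The invert helper and its hyperparameters are illustrative assumptions, not the paper's exact settings.

    # Sketch of a white-box model inversion attack on a split model:
    # given intercepted features z_star and the edge model, recover the input.
    import torch

    def invert(edge_model, z_star, shape=(1, 1, 28, 28),
               steps=2000, lr=0.1, tv_weight=1e-3):
        x_hat = torch.zeros(shape, requires_grad=True)
        opt = torch.optim.Adam([x_hat], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            # Feature-matching term: make edge(x_hat) reproduce z_star.
            loss = ((edge_model(x_hat) - z_star) ** 2).mean()
            # Total-variation prior keeps the reconstruction smooth/natural.
            tv = ((x_hat[..., 1:, :] - x_hat[..., :-1, :]).abs().mean()
                  + (x_hat[..., :, 1:] - x_hat[..., :, :-1]).abs().mean())
            (loss + tv_weight * tv).backward()
            opt.step()
        return x_hat.detach()

    # x_rec = invert(edge, z.detach())  # x_rec approximates the private input

A defense must make the intermediate features less informative about the input without destroying inference accuracy, which is the privacy-usability balance discussed above.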

Integrity Protection of Deep Learning Models

This project proposes an approach for end-users to check the integrity of their models in the cloud without the help of the cloud provider; a sketch of the core idea appears after the publication below.
  1. Zecheng He, Tianwei Zhang, and Ruby B. Lee, "Sensitive-Sample Fingerprinting of Deep Neural Networks", IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
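
The sketch below illustrates the core idea of sensitive-sample fingerprinting: generate an input whose output is maximally sensitive to changes in the model weights (a large Jacobian norm of d f_w(x) / d w), record the local model's output on it, and later compare against the cloud model's output, so that even a small, stealthy weight modification changes the answer. This is a simplified rendering; the optimization loop, hyperparameters, and input range are illustrative assumptions.

    # Sketch of sensitive-sample generation: gradient ascent on the squared
    # Frobenius norm of d(output)/d(weights) with respect to the input x.
    import torch

    def sensitive_sample(model, shape=(1, 1, 28, 28), steps=300, lr=0.05):
        x = torch.rand(shape, requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        weights = [p for p in model.parameters() if p.requires_grad]
        for _ in range(steps):
            opt.zero_grad()
            y = model(x).flatten()
            # Sensitivity = sum over output dims of ||d y_i / d W||^2.
            sensitivity = x.new_zeros(())
            for yi in y:
                grads = torch.autograd.grad(yi, weights,
                                            create_graph=True, retain_graph=True)
                sensitivity = sensitivity + sum((g ** 2).sum() for g in grads)
            (-sensitivity).backward()   # ascend on sensitivity
            opt.step()
            with torch.no_grad():
                x.clamp_(0, 1)          # keep x in a valid input range
        return x.detach()

    # Verification: record y_ref = model(x_s) locally, then periodically query
    # the cloud copy with x_s; any output mismatch signals a tampered model.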