WELCOME TO T-AI

The Trustworthy Artificial Intelligence (T-AI) Lab is committed to advancing the frontiers of machine learning and security research and to applying these breakthroughs to real-world problems.

Research for Fun:
  • Trustworthy ML: With the goal of building robust and secure machine learning models, we explore topics such as adversarial attacks and defenses, backdoor attacks and defenses, and robust and interpretable machine learning.
  • Privacy Protection: We prioritize privacy protection for ML systems by researching techniques such as differential privacy, secure multi-party computation, and homomorphic encryption. We also track emerging privacy threats such as membership inference and model extraction.
Research for World:
  • AI for Driving: We make self-driving cars safer by developing technology that helps them perceive their surroundings, make better decisions, and comply with traffic laws. We also evaluate their capabilities to ensure they are safe and ethical to deploy.
  • AI for Medicine: We collaborate with healthcare providers to develop precise and efficient AI algorithms for medical tasks like imaging and drug discovery, with a particular emphasis on nuclear medicine at present.


AISP News
2023-07-26 | PUBLICATION
T-AI lab's paper 'PointCRT: Detecting Backdoor in 3D Point Cloud via Corruption Robustness' was accepted by ACM MM 2023. Congratulations!

2023-07-26 | PUBLICATION
T-AI lab's paper 'AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning' was accepted by ACM MM 2023. Congratulations!

2023-07-26 | PUBLICATION
T-AI lab's paper 'A Four-Pronged Defense Against Byzantine Attacks in Federated Learning' was accepted by ACM MM 2023. Congratulations!

2023-07-14 | PUBLICATION
T-AI lab's paper 'Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks for Defending Adversarial Examples' was accepted by ICCV 2023. Congratulations!

2023-07-14 | PUBLICATION
T-AI lab's paper 'Downstream-agnostic Adversarial Examples' was accepted by ICCV 2023. Congratulations!

Contact
Address: Wuhan, China