Tianyun Yang (杨天韵)
I'm a final-year PhD student at the University of Chinese Academy of Sciences (UCAS), supervised by Juan Cao.
I am interested in the mechanistic interpretability and safety of AI models, covering topics such as hallucination mitigation, concept editing, and model attribution.
Email / Google Scholar / GitHub
Representative Research
Mitigating Hallucinations in Large Vision-Language Models via Modular Attribution and Intervention
ICLR, 2025
Tianyun Yang, Ziniu Li, Juan Cao, Chang Xu
Code / Paper
This work adopts a modular perspective to investigate the causes of hallucination in large vision-language models, analyzing how particular components contribute to this issue and proposing methods to mitigate it.
Model Synthesis for Zero-shot Model Attribution
IEEE Transactions on Multimedia (TMM), 2025
Tianyun Yang, Juan Cao, Danding Wang, Chang Xu
Code / Paper
This work develops a generalized model fingerprint extractor capable of zero-shot model attribution, effectively attributing models never seen during training. Central to our method is a model synthesis technique, which generates numerous synthetic models that mimic the fingerprint patterns of real-world generative models.
Pruning for Robust Concept Erasing in Diffusion Models
Workshop on Safe Generative AI at NeurIPS, 2024
Tianyun Yang, Ziniu Li, Juan Cao, Chang Xu
Paper
This work designs a robust concept erasing method based on differential pruning to eliminate harmful or copyrighted concepts from diffusion models.
Progressive Open Space Expansion for Open-Set Model Attribution
CVPR, 2023
Tianyun Yang, Danding Wang, Fan Tang, Xinying Zhao, Juan Cao, Sheng Tang
Code / Paper
This work presents the first study on Open-Set Model Attribution (OSMA), which simultaneously attributes images to known models and identifies images from unknown ones. We propose a Progressive Open Space Expansion (POSE) solution, which simulates open-set samples that maintain the same semantics as closed-set samples but carry different imperceptible traces.
Deepfake Network Architecture Attribution
AAAI, 2022
Tianyun Yang*, Ziyao Huang*, Juan Cao, Lei Li, Xirong Li
Code / Paper
This work presents the first study on Deepfake Network Architecture Attribution, which attributes fake images at the architecture level. Based on the observation that a GAN architecture tends to leave globally consistent fingerprints, while traces left by model weights vary across image regions, we provide a simple yet effective solution named DNA-Det.
Education
2019-present
Institute of Computing Technology, Chinese Academy of Sciences
Ph.D. in Computer Science
Advisor: Juan Cao
2023-2024
The University of Sydney
Joint Ph.D., School of Computer Science
Advisor: Chang Xu
2015-2019
Wuhan University
B.E., School of Electrical Engineering, Excellent Engineer Class
Honors & Awards
2022: First Prize of the Academic Award, University of Chinese Academy of Sciences
2021: Director's Excellence Scholarship, Institute of Computing Technology
2021: First Prize in the Chinese AI Competition (Deepfake Identification)
2018: First Prize in the Mathematical Contest in Modeling, Hubei Province
Academic Service
Reviewer: TMM, ICML'25, ICLR'25, TMLR'24, ICLR'24, NeurIPS'24, NeurIPS'23, CVPR'23, NeurIPS'22