Welcome to Zhenlan’s homepage!
Updated on 27 Oct. 2025
Zhenlan Ji is a fourth-year Ph.D. candidate in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology, supervised by Prof. Shuai Wang. He received his Bachelor’s degree in Computer Science and Technology from Nanjing University, Nanjing, China, in 2021. His research interests include Software Engineering and Deep Learning, with a focus on Causality and LLM-based Agents.
Since Feb. 2025, he has been on a short-term research visit to the Momentum Lab at the University of Tokyo, Japan, under the supervision of Prof. Lei Ma.
Education
- Ph.D. in Computer Science and Engineering, The Hong Kong University of Science and Technology. Sept. 2021 - Present.
- B.S. in Computer Science and Technology (FinTech), Nanjing University. Sept. 2017 - June 2021.
Publications
[SIGMOD] Privacy-preserving and Verifiable Causal Prescriptive Analytics.
Zhaoyu Wang, Pingchuan Ma, Zhantong Xue, Yanbo Dai, Zhenlan Ji, and Shuai Wang.
In ACM SIGMOD International Conference on Management of Data, 2026.
[SIGMOD] Guardrail: Automated Integrity Constraint Synthesis From Noisy Data.
Pingchuan Ma, Zhaoyu Wang, Zhenlan Ji, Zongjie Li, Ao Sun, and Shuai Wang.
In ACM SIGMOD International Conference on Management of Data, 2026.
[CCS] The Phantom Menace in Crypto-Based PET-Hardened Deep Learning Models: Invisible Configuration-Induced Attacks.
Yiteng Peng, Dongwei Xiao, Zhibo Liu, Zhenlan Ji, Daoyuan Wu, Shuai Wang, and Juergen Rahmel.
In ACM SIGSAC Conference on Computer and Communications Security, 2025.
[ISSTA] Causality-Aided Evaluation and Explanation of Large Language Model-based Code Generation.
Zhenlan Ji, Pingchuan Ma, Zongjie Li, Zhaoyu Wang, Shuai Wang.
In The 34th ACM SIGSOFT International Symposium on Software Testing and Analysis, 2025.
[preprint]
[TOSEM] Reeq: Testing and Mitigating Ethically Inconsistent Suggestions of Large Language Models with Reflective Equilibrium.
Pingchuan Ma, Zhaoyu Wang, Zongjie Li, Zhenlan Ji, Ao Sun, Juergen Rahmel, and Shuai Wang.
In ACM Transactions on Software Engineering and Methodology, 2025.
[USENIX] SelfDefend: LLMs Can Defend Themselves against Jailbreaking in a Practical Manner.
Xunguang Wang, Daoyuan Wu, Zhenlan Ji, Zongjie Li, Pingchuan Ma, Shuai Wang, Yingjiu Li, Yang Liu, Ning Liu, Juergen Rahmel.
In The 34th USENIX Security Symposium, 2025.
[preprint]
[ICSE] Testing and Understanding Deviation Behaviors in FHE-hardened Machine Learning Models.
Yiteng Peng, Daoyuan Wu, Zhibo Liu, Dongwei Xiao, Zhenlan Ji, Juergen Rahmel, and Shuai Wang.
In The 47th IEEE/ACM International Conference on Software Engineering, 2025.
[ICSE] Enabling Runtime Verification of Causal Discovery Algorithms with Automated Conditional Independence Reasoning.
Pingchuan Ma, Zhenlan Ji, Peisen Yao, Shuai Wang, and Kui Ren.
In The 46th IEEE/ACM International Conference on Software Engineering, 2024.
[ASE] Causality-Aided Trade-off Analysis for Machine Learning Fairness.
Zhenlan Ji, Pingchuan Ma, Shuai Wang, Yanhui Li.
In The 38th IEEE/ACM International Conference on Automated Software Engineering, 2023.
[preprint] [code]
[ASE] PerfCE: Performance Debugging on Databases with Chaos Engineering-Enhanced Causality Analysis.
Zhenlan Ji, Pingchuan Ma, Shuai Wang.
In The 38th IEEE/ACM International Conference on Automated Software Engineering, 2023.
[preprint] [code] [documentation]
[ICSE] CC: Causality-Aware Coverage Criterion for Deep Neural Networks.
Zhenlan Ji, Pingchuan Ma, Yuanyuan Yuan, Shuai Wang.
In The 45th IEEE/ACM International Conference on Software Engineering, 2023.
[code]
[SEKE] Unlearnable Examples: Protecting Open-Source Software from Unauthorized Neural Code Learning.
Zhenlan Ji, Pingchuan Ma, Shuai Wang.
In The 34th International Conference on Software Engineering and Knowledge Engineering, 2022.
[code]
[TIFS] NoLeaks: Differentially Private Causal Discovery Under Functional Causal Model.
Pingchuan Ma, Zhenlan Ji, Qi Pang, Shuai Wang.
In IEEE Transactions on Information Forensics and Security, 2022.
Preprints
[arXiv] Digging Into the Internal: Causality-Based Analysis of LLM Function Calling.
Zhenlan Ji, Daoyuan Wu, Pingchuan Ma, Zongjie Li, Shuai Wang.
[arXiv] Disabling Self-Correction in Retrieval-Augmented Generation via Stealthy Retriever Poisoning.
Yanbo Dai, Zhenlan Ji, Zongjie Li, Kuan Li, Shuai Wang.
[arXiv] Evaluating LLMs on Sequential API Call Through Automated Test Generation.
Yuheng Huang, Da Song, Zhenlan Ji, Shuai Wang, and Lei Ma.
[arXiv] SoK: Evaluating Jailbreak Guardrails for Large Language Models.
Xunguang Wang, Zhenlan Ji, Wenxuan Wang, Zongjie Li, Daoyuan Wu, and Shuai Wang.
[arXiv] IP Leakage Attacks Targeting LLM-Based Multi-Agent Systems.
Liwen Wang, Wenxuan Wang, Shuai Wang, Zongjie Li, Zhenlan Ji, Zongyi Lyu, Daoyuan Wu, and Shing-Chi Cheung.
[arXiv] NAMET: Robust Massive Model Editing via Noise-Aware Memory Optimization.
Yanbo Dai, Zhenlan Ji, Zongjie Li, and Shuai Wang.
[arXiv] STShield: Single-Token Sentinel for Real-Time Jailbreak Detection in Large Language Models.
Xunguang Wang, Wenxuan Wang, Zhenlan Ji, Zongjie Li, Pingchuan Ma, Daoyuan Wu, and Shuai Wang.
[arXiv] Testing and Understanding Erroneous Planning in LLM Agents through Synthesized User Inputs.
Zhenlan Ji, Daoyuan Wu, Pingchuan Ma, Zongjie Li, Shuai Wang.
[arXiv] InstructTA: Instruction-Tuned Targeted Attack for Large Vision-Language Models.
Xunguang Wang, Zhenlan Ji, Pingchuan Ma, Zongjie Li, Shuai Wang.
Academic Services
Reviewer:
- 2025: ICLR, JSS
Program Committee:
- 2025: AAAI
Shadow Program Committee:
- 2025: ICSE
Artifact Evaluation Committee:
- 2022: ISSTA
- 2023: ISSTA, CCS
- 2024: ICSE
External Reviewer:
- 2022: ASE, AsiaCCS
- 2023: ISSTA, USENIX Security, FSE, CCS, ASE
- 2024: ISSTA, S&P, USENIX Security
- 2025: NDSS, ACL, ASE
Publicity Chair:
- 2024: AISTA@ISSRE 2024
Teaching
- Teaching Assistant, COMP2011 - Programming with C++, HKUST, 2022 Fall.
- Teaching Assistant, COMP3633 - Competitive Programming in Cybersecurity II, HKUST, 2023 Spring.
- Teaching Assistant, COMP2633 - Competitive Programming in Cybersecurity I, HKUST, 2024 Fall.
