Ms Ronghui Mu
Lecturer
Computer Science
My research focuses on evaluating and improving the robustness of deep neural networks (DNNs), with a particular emphasis on safety-critical applications. This includes adversarial attacks and defenses, formal robustness verification, and systematic safety testing of DNNs.
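As a brief illustration of the adversarial-attack side of this area, the sketch below shows a one-step FGSM perturbation in PyTorch. It is a minimal, generic example, assuming a pretrained classifier `model`, an `(image, label)` pair, and an illustrative `epsilon`; none of these come from my own projects.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Minimal untargeted FGSM sketch: perturb the input along the
    sign of the loss gradient, bounded in L-infinity by epsilon."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    # Keep the perturbed image in the valid pixel range.
    return adversarial.clamp(0, 1).detach()
```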
I have conducted robustness analysis across a wide range of AI systems, including:
- Image classifiers
- Video recognition models
- 3D point cloud networks
- Reinforcement learning agents
- Large language models (LLMs)
My long-term vision is to develop safe, reliable, and trustworthy deep learning systems that can withstand real-world uncertainties and malicious perturbations.
I am currently seeking motivated PhD students with a strong background in machine learning and an interest in AI safety. Research topics of interest include:
- Safe, Secure, and Explainable AI
- Adversarial Machine Learning and Robustness
- Probabilistic Verification and Formal Methods in AI
- Reinforcement Learning and Its Applications
- Natural Language Processing, Computer Vision, and Generative AI (LLMs, VLMs)
Candidates with a strong academic foundation and a genuine passion for trustworthy AI are encouraged to get in touch. In addition, CSC (China Scholarship Council) PhD scholarships are available each year for Chinese students; please feel free to contact me if you are interested.