PhD student at the University of Melbourne working on Trustworthy AI — currently focused on adversarial attacks and safety alignment for vision-language models.
Our paper UltraBreak, on universal jailbreak attacks against VLMs, was accepted at ICLR 2026.
Open to collaborations on adversarial ML and AI safety. Feel free to reach out!
