
Kaiyuan Cui

PhD student at the University of Melbourne working on Trustworthy AI — currently focused on adversarial attacks and safety alignment for vision-language models.

Our paper UltraBreak, on universal and transferable jailbreak attacks on vision-language models, was accepted at ICLR 2026.

Open to collaborations on adversarial ML and AI safety. Feel free to reach out!

Website · LinkedIn · Email

Pinned

  1. UltraBreak (Public)

     [ICLR2026] Toward Universal and Transferable Jailbreak Attacks on Vision-Language Models

     Python · 19 stars · 1 fork

  2. Bagel3D (Public)

     A simple 3D engine based on Bagel (Basic Academic Graphical Engine Library)

     Java · 1 star