📄 ACL 2024: RGCL, Retrieval-Guided Contrastive Learning for Hateful Meme Detection

📄 EMNLP 2025 (Oral): RA-HMD, Robust Adaptation of Large Multimodal Models for Retrieval-Augmented Hateful Meme Detection

Official implementation with pretrained models and reproduction scripts.
This project targets multimodal hate detection on social media, addressing challenges such as image–text semantic inconsistency, implicit references, and sarcastic or ironic expressions. We develop a CLIP + LLMs framework for multimodal representation learning and semantic alignment. Prompt Engineering (PE) optimizes LLM outputs, while multi-LLM
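A minimal sketch of the kind of CLIP-based fusion pipeline described above. This is an illustration, not the project's actual implementation: the encoder outputs are stand-in arrays (a real system would use CLIP image/text encoders), and the fusion and scoring functions (`fuse`, `hate_score`) are hypothetical names chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    # Unit-normalize embeddings, as CLIP does before similarity scoring.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def fuse(image_emb, text_emb):
    # Simple late fusion: concatenate the normalized image and text embeddings.
    return np.concatenate([l2_normalize(image_emb), l2_normalize(text_emb)], axis=-1)

def hate_score(fused, w, b=0.0):
    # Linear probe + sigmoid over the fused representation -> probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-(fused @ w + b)))

# Stand-ins for CLIP image/text encoder outputs (batch of 4 memes, dim 512 each).
img_emb = rng.normal(size=(4, 512))
txt_emb = rng.normal(size=(4, 512))

fused = fuse(img_emb, txt_emb)        # shape (4, 1024)
w = rng.normal(size=1024) * 0.01      # untrained probe weights, for illustration
scores = hate_score(fused, w)

print(fused.shape, scores.shape)
```

In a real pipeline the probe weights would be trained on labeled memes, and the concatenation could be replaced by cross-attention or an LLM-based fusion head; the point here is only the shape of the data flow from paired encoders to a single hatefulness score.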
🔬 Official implementation of ExPO-HM: Learning to Explain-then-Detect for Hateful Meme Detection (ICLR 2026). Novel multimodal RL approach for interpretable and explainable content moderation.