Fusion Segment Transformer: Bi-Directional Attention Guided Fusion Network for
AI-generated Music Detection
Yumin Kim* • Seonghyeon Go*
MIPPIA Inc.
Official PyTorch implementation of the Fusion Segment Transformer (FST) for AI-generated music detection. This repository contains the code and pretrained models for detecting AI-generated music.
Keywords: AI music detection, AIGM, music deepfake detection, deepfake music
With the rise of generative AI technology, anyone can now easily create and deploy AI-generated music, which has heightened the need for technical solutions to address copyright and ownership issues. While existing work has largely focused on short audio clips, the challenge of full-audio detection, which requires modeling long-term structure and context, remains insufficiently explored. To address this, we propose an improved version of the Segment Transformer, termed the Fusion Segment Transformer. As in our previous work, we extract content embeddings from short music segments using diverse feature extractors. We then enhance the architecture for full-audio AI-generated music detection by introducing a Gated Fusion Layer that effectively integrates content and structural information, enabling the model to capture long-term context. Experiments on the SONICS and AIME datasets show that our approach consistently outperforms our previous model and recent baselines, achieving state-of-the-art results in full-audio detection.
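Conceptually, a gated fusion layer performs a learned, element-wise interpolation between two embeddings. The sketch below is a minimal NumPy illustration of that idea only, not the actual FST implementation; the function name, shapes, and the exact gating formula are assumptions for illustration (the real model operates on content and structural embeddings inside a PyTorch network).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(content, structure, W, b):
    """Fuse a content embedding and a structural embedding with a learned gate.

    The gate g is computed from the concatenated embeddings; the output is the
    element-wise convex combination g * content + (1 - g) * structure.
    Names and shapes are illustrative, not taken from the FST codebase.
    """
    z = np.concatenate([content, structure])    # (2d,)
    g = sigmoid(W @ z + b)                      # (d,) gate values in (0, 1)
    return g * content + (1.0 - g) * structure  # (d,) fused embedding

# Toy example with embedding dimension d = 4 and random (untrained) weights
rng = np.random.default_rng(0)
d = 4
content = rng.standard_normal(d)
structure = rng.standard_normal(d)
W = rng.standard_normal((d, 2 * d))
b = np.zeros(d)
fused = gated_fusion(content, structure, W, b)
```

Because each gate value lies in (0, 1), every component of the fused vector falls between the corresponding components of the two input embeddings, so the gate smoothly trades off content against structural information.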
This repository can be installed by cloning the GitHub repository and installing the required dependencies:

git clone https://github.com/Mippia/FST-AI-music-detection.git
cd FST-AI-music-detection

To set up your environment, please run:

pip install -r requirements.txt

To get started, download our pre-trained checkpoints from Google Drive:
Use the inference.py script to check whether a piece of music is AI-generated or human-made. For example:

python inference.py --audio ./examples/test.wav

Our code and demo website are licensed under the GPL License.