Eleonora Lopez, Luigi Sigillo, Federica Colonnese, Massimo Panella and Danilo Comminiello
ISPAMM Lab and NESYA Lab, Sapienza University of Rome
- [2025.04.16] Updated checkpoints released, along with some fixes in the code!
- [2025.04.08] Presented the poster at ICASSP 2025 in Hyderabad!
- [2024.12.25] The checkpoints are available on request here!
- [2024.12.25] Code is available now! Feel free to watch this repository for the latest updates.
- [2024.12.20] The paper has been accepted for presentation at ICASSP 2025!
- [2024.09.17] The paper has been published on arXiv. The PDF version is available here!
Our work bridges the gap between brain signals and visual understanding by tackling the challenge of generating images directly from EEG signals, a low-cost, non-invasive, and portable neuroimaging modality suitable for real-time applications.
We propose a streamlined framework leveraging the ControlNet adapter to condition a latent diffusion model (LDM) through EEG signals. Extensive experiments and ablation studies on popular benchmarks show our method outperforms state-of-the-art models while requiring only minimal preprocessing and fewer components.
Unlike existing methods that demand heavy preprocessing, complex architectures, and additional components like captioning models, our approach is efficient and straightforward. This enables a new frontier in real-time BCIs, advancing tasks like visual cue decoding and future neuroimaging applications.
For further evaluation results, please refer to our paper.
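At a high level, the framework can be sketched with the public diffusers API. This is a minimal sketch, not the repository's actual generate_controlnet.py entry point; the checkpoint path, conditioning image, and prompt below are placeholders:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Load a trained ControlNet checkpoint (placeholder path).
controlnet = ControlNetModel.from_pretrained(
    "path/to/checkpoint/controlnet", torch_dtype=torch.float16
)

# Attach it to the frozen Stable Diffusion 2.1 base LDM, as in the paper.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The EEG signal, preprocessed into the conditioning image the adapter expects
# (placeholder file; the dataset's conditioning_image column provides these).
cond = load_image("path/to/eeg_conditioning_image.png")

image = pipe("a photo of the seen class", image=cond).images[0]
image.save("generated.png")
```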
```bash
conda create --name=gwit python=3.9
conda activate gwit
pip install src/diffusers
pip install transformers accelerate xformers==0.0.16 wandb numpy==1.26.4 datasets torchvision==0.14.1 scikit-learn torchmetrics==1.4.1 scikit-image pytorch_fid
```
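As an optional sanity check (our suggestion, not part of the official setup), you can verify that the pinned packages import cleanly and that a GPU is visible:

```python
# Optional environment sanity check (not part of the official setup).
import diffusers
import torch
import transformers

print(torch.__version__, diffusers.__version__, transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
```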
To launch the training of the model, you can use the following command. You need to change output_dir and specify the GPU number you want to use (right now only 1 GPU is supported):

```bash
CUDA_VISIBLE_DEVICES=N accelerate launch src/gwit/train_controlnet.py --caption_from_classifier --subject_num=4 --pretrained_model_name_or_path=stabilityai/stable-diffusion-2-1-base --output_dir=output/model_out_CVPR_SINGLE_SUB_CLASSIFIER_CAPTION --dataset_name=luigi-s/EEG_Image_CVPR_ALL_subj --conditioning_image_column=conditioning_image --image_column=image --caption_column=caption --resolution=512 --learning_rate=1e-5 --train_batch_size=8 --num_train_epochs=50 --tracker_project_name=controlnet --enable_xformers_memory_efficient_attention --checkpointing_steps=1000 --validation_steps=500 --report_to wandb --validation_image ./using_VAL_DATASET_PLACEHOLDER.jpeg --validation_prompt "we are using val dataset hopefuly"
```

You can change the dataset with the dataset_name parameter, choosing one of (a loading sketch follows the list):
- luigi-s/EEG_Image_CVPR_ALL_subj
- luigi-s/EEG_Image_TVIZ_ALL_subj
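As referenced above, here is a hedged sketch of inspecting one of these datasets; the "train" split name is an assumption, while the column names come from the training flags:

```python
from datasets import load_dataset

# Load one of the GWIT datasets from the Hugging Face Hub
# (split name is an assumption; adjust if needed).
ds = load_dataset("luigi-s/EEG_Image_CVPR_ALL_subj", split="train")
print(ds.column_names)   # expected to include: image, conditioning_image, caption
print(ds[0]["caption"])  # class-derived caption used to prompt the LDM
```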
Request access to the pretrained models from Google Drive.
To generate images from the model, you can use one of the following commands:
```bash
CUDA_VISIBLE_DEVICES=N python src/gwit/generate_controlnet.py --controlnet_path=EEGCVPR40_single_22k_guess_drop/checkpoint-22000/controlnet/ --caption --single_image_for_eval --guess
CUDA_VISIBLE_DEVICES=N python src/gwit/generate_controlnet.py --controlnet_path=EEGCVPR40_multi_20k_guess_drop/ --caption --single_image_for_eval --guess
CUDA_VISIBLE_DEVICES=N python src/gwit/generate_controlnet.py --controlnet_path=TVIZ_MULTISUB_19k_guess_drop/checkpoint-19000/controlnet/ --caption --single_image_for_eval --guess
```

Important: DO NOT CHANGE the checkpoint folder names. The code relies on this naming convention to trigger some of its behavior (a refactor is planned, I know :( ).
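For context on what those paths contain, the --controlnet_path folders follow the standard diffusers ControlNet save layout and can also be loaded directly; a hedged sketch (exact folder contents may differ):

```python
from diffusers import ControlNetModel

# The checkpoint directories above are expected to look roughly like:
#   EEGCVPR40_single_22k_guess_drop/checkpoint-22000/controlnet/
#     ├── config.json
#     └── diffusion_pytorch_model.safetensors
controlnet = ControlNetModel.from_pretrained(
    "EEGCVPR40_single_22k_guess_drop/checkpoint-22000/controlnet"
)
```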
Request access to the pretrained models from Google Drive.
To launch the testing of the model, you can use the following command (you need to change the output_dir):
```bash
CUDA_VISIBLE_DEVICES=N python src/gwit/evaluation/evaluate.py --controlnet_path=output/model_out_CVPR_SINGLE_SUB_CLASSIFIER_CAPTION/checkpoint-24000/controlnet/ --guess
```

The datasets used are hosted on Hugging Face; see the dataset list in the training section above.
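Beyond evaluate.py, a standalone FID score can be computed with the pytorch_fid package from the installation step; a minimal sketch with placeholder folders:

```python
from pytorch_fid.fid_score import calculate_fid_given_paths

# FID between directories of real and generated images (paths are placeholders).
fid = calculate_fid_given_paths(
    ["path/to/real_images", "path/to/generated_images"],
    batch_size=50,
    device="cuda",
    dims=2048,
)
print(f"FID: {fid:.2f}")
```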
Please cite our work if you find it useful:
```bibtex
@INPROCEEDINGS{lopezsigillogwit,
  author={Lopez, Eleonora and Sigillo, Luigi and Colonnese, Federica and Panella, Massimo and Comminiello, Danilo},
  booktitle={ICASSP 2025 - 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={Guess What I Think: Streamlined EEG-to-Image Generation with Latent Diffusion Models},
  year={2025},
  volume={},
  number={},
  pages={1-5},
  keywords={Neuroimaging;Adaptation models;Visualization;Streaming media;Functional magnetic resonance imaging;Diffusion models;Brain modeling;Electroencephalography;Real-time systems;Spatial resolution;EEG;Diffusion Models;Image Generation},
  doi={10.1109/ICASSP49660.2025.10890059}}
```
This project is based on diffusers. Thanks for their awesome work.


