
Flash Attention failed, using default SDPA #166


Open
Ngardos opened this issue Apr 24, 2025 · 2 comments

Comments


Ngardos commented Apr 24, 2025

I have no issues with Flash Attention on my ComfyUI install. It loads and works fine with other things, but while frames are being generated I get an error stating `Flash Attention failed, using default SDPA`. Does anyone have a solution for this? Thank you.
```
Checkpoint files will always be loaded safely.
Total VRAM 24576 MB, total RAM 32457 MB
pytorch version: 2.6.0+cu126
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Using Flash Attention
Python version: 3.12.7 (tags/v3.12.7:0b05ead, Oct  1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]
ComfyUI version: 0.3.29
ComfyUI frontend version: 1.17.11
```
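For what it's worth, this warning usually means the node tried flash-attn, caught an exception, and silently fell back to PyTorch's built-in attention, so generation still works but without the flash-attn speedup. A minimal sketch of that fallback pattern (illustrative only; the `attention` wrapper here is hypothetical, not this repo's exact code):

```python
import logging
import torch
import torch.nn.functional as F

try:
    from flash_attn import flash_attn_func
except ImportError:
    flash_attn_func = None

def attention(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim)
    if flash_attn_func is not None:
        try:
            # flash_attn_func expects (batch, seq_len, heads, head_dim)
            # tensors in fp16/bf16 on a CUDA device, so transpose in and out.
            out = flash_attn_func(q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2))
            return out.transpose(1, 2)
        except Exception as e:
            # This swallowed exception is what produces the log message above.
            logging.warning(f"Flash Attention failed, using default SDPA: {e}")
    # Fallback: PyTorch's built-in scaled dot-product attention.
    return F.scaled_dot_product_attention(q, k, v)

```

Because the exception is swallowed, the log line alone doesn't say why flash-attn failed; an unsupported dtype, head dimension, or GPU are common causes.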
@Noobkrusher3000

I'm having the same issue on my system.

```
Checkpoint files will always be loaded safely.
Total VRAM 24560 MB, total RAM 31867 MB
pytorch version: 2.5.1+rocm6.2
AMD arch: gfx1100
Set vram state to: NORMAL_VRAM
Device: cuda:0 AMD Radeon RX 7900 XTX : native
Using Flash Attention
Python version: 3.9.21 (main, Jan 7 2025, 18:39:12)
[GCC 14.2.1 20240910]
ComfyUI version: 0.3.30
ComfyUI frontend version: 1.17.11
```
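Since the wrapper only logs the failure, one way to surface the underlying exception is to call `flash_attn_func` directly in the same Python environment ComfyUI uses (a diagnostic sketch; shapes are (batch, seq_len, heads, head_dim), and fp16 or bf16 is required):

```python
import torch
from flash_attn import flash_attn_func

# Small dummy tensors: batch=1, seq_len=128, heads=8, head_dim=64.
q = torch.randn(1, 128, 8, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 128, 8, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 128, 8, 64, dtype=torch.float16, device="cuda")

try:
    out = flash_attn_func(q, k, v)
    print("flash-attn OK:", out.shape)
except Exception as e:
    print("flash-attn raised:", e)
```

Note that on ROCm, flash-attn needs a ROCm-specific build, and as far as I know upstream support targets CDNA datacenter GPUs (MI200/MI300) rather than RDNA3 cards like the gfx1100 in this log, so the fallback may simply be expected there.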

@Brie-Wensleydale

Yeah, I'm having this problem as well.

Got an NVIDIA 4090 with 24 GB of VRAM.

The error goes away when I stop using Flash Attention, but I'm having other issues.

Decidedly unfun.
