Hey there! Been trying to get TRIBE v2 running on a HuggingFace Space with ZeroGPU (free A10G tier) and running into a wall.
Setup:
- HF Space: https://huggingface.co/spaces/somebeast/tribe-v2
- Python 3.12 on ZeroGPU (zero-a10g)
- Installed via: `tribev2[plotting] @ git+https://github.com/facebookresearch/tribev2.git`
- `HF_TOKEN` set as a Space secret, `huggingface_hub.login()` called before model load
What happens:
`TribeModel.from_pretrained("facebook/tribev2")` throws a `FileNotFoundError` when called inside a `@spaces.GPU`-decorated function. The error gets swallowed by the ZeroGPU wrapper, so I can't see the actual missing file path, just the exception class.
The weird part is that it fails in ~11 seconds consistently, which seems too fast for a download attempt. It feels like it's looking for a local path that doesn't exist in the GPU worker's filesystem.
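For reference, here's a condensed sketch of the app code. The `from tribev2 import TribeModel` import path is my guess at the package layout (adjust to the real one); everything else mirrors what I'm running, and it only executes inside a Space:

```python
import os

import spaces
from huggingface_hub import login, snapshot_download
from tribev2 import TribeModel  # assumed import path; adjust to the actual package layout

login(token=os.environ["HF_TOKEN"])

# Completes without error at startup, so the token and repo access look fine
local_path = snapshot_download("facebook/tribev2")

@spaces.GPU
def generate(inputs):
    # FileNotFoundError raised here, ~11s in, before any visible download traffic
    model = TribeModel.from_pretrained(local_path)
    return model(inputs)
```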
What I've tried:
- Pre-downloading with `snapshot_download("facebook/tribev2")` at startup and passing the local path: same error
- Caching to `~/.cache/`, `/tmp/`, and the home dir: all fail (ZeroGPU workers seem to have isolated filesystems)
- Calling `login(token=...)` inside the GPU function: no change
- Setting the `cache_folder` param: same error
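Next thing on my list is wrapping the load call so the traceback gets stringified before it crosses the ZeroGPU process boundary, in the hope of recovering the missing path. A minimal sketch (generic, nothing TRIBE-specific; `load_fn` stands in for `TribeModel.from_pretrained`):

```python
import traceback

def load_with_visible_error(load_fn, *args, **kwargs):
    """Call load_fn; on failure, re-raise with the full formatted
    traceback embedded in the message, so the missing file path
    survives even if the wrapper only reports the exception class."""
    try:
        return load_fn(*args, **kwargs)
    except FileNotFoundError as exc:
        detail = traceback.format_exc()
        raise RuntimeError(f"model load failed:\n{detail}") from exc
```

Inside the `@spaces.GPU` function that would be `model = load_with_visible_error(TribeModel.from_pretrained, "facebook/tribev2")`.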
Questions:
- Does `from_pretrained` try to load sub-models (LLaMA 3.2-3B, VJEPA2, Wav2Vec) from specific local paths? If so, is there a way to control where it looks?
- Has anyone successfully run TRIBE v2 on HuggingFace Spaces (not Colab)?
- Would it help to have a version of `from_pretrained` that accepts pre-downloaded model objects instead of downloading them internally?
I've got a workaround using Phi-2 as a text encoder with perplexity-based scoring, but would love to use the real model. Any pointers appreciated!