HackCode auto-selects the best uncensored model for your hardware. All models run locally.

The 35B MoE model uses only 3B active parameters per token, so it runs fast, while its 35B total parameters deliver high-quality output. Best of both worlds.
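To see why that trade-off pays off, here is a back-of-the-envelope sketch. The ~2-FLOPs-per-parameter rule of thumb is a common approximation, not a HackCode measurement:

```python
# Back-of-the-envelope: ~2 FLOPs per active parameter per generated token.
ACTIVE_PARAMS = 3e9   # parameters the MoE router activates per token
TOTAL_PARAMS = 35e9   # parameters kept in memory across all experts

flops_moe = 2 * ACTIVE_PARAMS    # ~6 GFLOPs per token
flops_dense = 2 * TOTAL_PARAMS   # ~70 GFLOPs if all 35B were dense

print(f"MoE per-token compute:   {flops_moe / 1e9:.0f} GFLOPs")
print(f"Dense per-token compute: {flops_dense / 1e9:.0f} GFLOPs")
print(f"Rough speedup: ~{flops_dense / flops_moe:.1f}x")
```

The catch is memory: all 35B parameters must still fit in RAM/VRAM, which is why hardware detection matters when picking a model.
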
You can also use any other Ollama model: `llama3`, `deepseek-coder`, `codestral`, `mistral`. HackCode works with all of them.
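For instance, any of these can be pulled and queried through the official `ollama` Python client. The model choice below is just an illustration; how HackCode wires a pulled model into its own workflow is outside this snippet:

```python
import ollama  # pip install ollama; requires a running Ollama server

# Pull an alternative model once; later runs reuse the local copy.
ollama.pull("deepseek-coder")

# Any locally available model is addressable by name.
response = ollama.chat(
    model="deepseek-coder",
    messages=[{"role": "user", "content": "Write hello world in C."}],
)
print(response["message"]["content"])
```
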
### Pull Any Model from HuggingFace
You're not limited to the built-in list. During setup, press **`[h]`** to pull any GGUF model directly from [HuggingFace](https://huggingface.co):
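A minimal sketch of what such a pull can look like at the Ollama layer, assuming HackCode delegates to Ollama's native `hf.co/<user>/<repo>` support; the repository below is only an example:

```python
import ollama  # pip install ollama; requires a running Ollama server

# Ollama resolves hf.co/<user>/<repo> references to GGUF weights hosted
# on HuggingFace, so any GGUF repository can be pulled by name.
MODEL = "hf.co/bartowski/Llama-3.2-3B-Instruct-GGUF"  # example repo

ollama.pull(MODEL)
response = ollama.generate(model=MODEL, prompt="Say hello.")
print(response["response"])
```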