I tested the NAMO-R1 model, and the results are amazing! Now, I’m wondering what kind of performance boost and efficiency it could achieve with MLX on Apple Silicon.
The model is in `.safetensors` format and includes:

- `namo.llm` (a language model)
- `namo.ve` (possibly a vision encoder)
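A first step is just inspecting what those files contain. Safetensors files start with an 8-byte little-endian length followed by a JSON header listing every tensor's name, dtype, and shape, so you can peek inside with the standard library alone (the snippet below builds a tiny synthetic file for the demo, since the real `namo.llm` contents are unknown here):

```python
import json
import struct

def read_safetensors_header(path):
    """Return the JSON header of a .safetensors file (tensor names, dtypes, shapes)."""
    with open(path, "rb") as f:
        # First 8 bytes: little-endian u64 giving the length of the JSON header.
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Demo with a tiny synthetic file; swap in the real namo.llm path on your machine.
demo_header = {"layer.weight": {"dtype": "F32", "shape": [2, 2], "data_offsets": [0, 16]}}
payload = json.dumps(demo_header).encode()
with open("demo.safetensors", "wb") as f:
    f.write(struct.pack("<Q", len(payload)) + payload + b"\x00" * 16)

header = read_safetensors_header("demo.safetensors")
for name, meta in header.items():
    print(name, meta["dtype"], meta["shape"])  # layer.weight F32 [2, 2]
```

Listing the tensor names this way tells you which architecture components you need to recreate in `mlx.nn` before any weights can be mapped.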
I’d love to convert it to MLX, but I’m not sure where to start. I’ve tried extracting the weights and exploring MLX documentation, but I still need guidance on:
- A way to convert the safetensors weights to an MLX-compatible format
- Defining the model architecture in `mlx.nn`
- Mapping and loading the weights correctly
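For the mapping step, the usual pattern is to rename the checkpoint's keys so they line up with the attribute paths of your `mlx.nn` module tree, then hand the renamed dict to `Module.load_weights`. A minimal sketch of that renaming, assuming placeholder key conventions (the actual NAMO-R1 key names are unknown here, so `model.layers.*` and `vision_tower.*` are purely illustrative):

```python
# Hypothetical sketch: renaming checkpoint keys to match an MLX module tree.
# The real NAMO-R1 key names are not known here; the prefixes below are
# placeholders for illustration only.

def remap_keys(weights, rules):
    """Apply (old_prefix, new_prefix) rules to every key in a weight dict."""
    remapped = {}
    for key, value in weights.items():
        for old, new in rules:
            if key.startswith(old):
                key = new + key[len(old):]
                break  # apply at most one rule per key
        remapped[key] = value
    return remapped

rules = [
    ("model.layers.", "language_model.layers."),  # placeholder mapping
    ("vision_tower.", "vision_encoder."),         # placeholder mapping
]

weights = {
    "model.layers.0.self_attn.q_proj.weight": "W_q",
    "vision_tower.patch_embed.weight": "W_p",
}
remapped = remap_keys(weights, rules)
print(sorted(remapped))
```

Once the keys match your module's parameter paths, `model.load_weights(list(remapped.items()))` should accept them directly; `mlx_lm`'s `convert` utility may also handle the language-model half for you if the architecture is one it recognizes, though a custom vision encoder would likely still need hand-written mapping.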
Has anyone here worked on similar model conversions? Any guides, scripts, or insights would be greatly appreciated!
Can’t wait to see what magic NAMO-R1 + MLX can do.