Issues: Blaizzy/mlx-vlm
- #344 Implement Persistent Prompt Cache to Reduce Time-to-First-Token in Chat Contexts (enhancement), opened May 6, 2025 by Blaizzy
- #334 Output from generate is a tuple, rather than a string (bug), opened May 2, 2025 by jrp2014
- #330 Inaccurate Coordinate Outputs for MLX-Quantized UI-TARS-1.5 (4bit/6bit), opened Apr 28, 2025 by francedot
- #317 Adjusting sample_utils.py for top_k and min_p Parameters (enhancement), opened Apr 20, 2025 by Blaizzy
- #306 Incorrect Output When Extracting Authors and Affiliations from Image using mlx_vlm.generate, opened Apr 18, 2025 by Huy2002-IT
- #302 Match the output of scipy.ndimage.zoom in multimodality's vision module resize_image(), opened Apr 17, 2025 by Blaizzy
- #284 Issue with Finetuning Mistral (Mistral-Small-3.1-24B-Instruct-2503-4bit), opened Mar 30, 2025 by keshavpeswani
- #277 NAMO-R1 Model is Amazing! How to Convert it to MLX?, opened Mar 26, 2025 by yourappleintelligence
- #259 It seems that version v0.1.19 does not follow instructions and only describes images, opened Mar 19, 2025 by swlee60