Specification Version: 0.1.7
Status: Stable (backward-compatible)
Released: MLX-Knife 2.0.4-beta.1
Based on GitHub Issue #8 - Comprehensive JSON output support for all commands
MLX Knife is promoted as a "scriptable" tool, but formatted terminal output makes automation difficult. JSON output enables robust scripting integration and reliable machine parsing.
MLX Knife distinguishes between two levels of model validation:
- Purpose: Verify that downloaded model files are complete and uncorrupted
- Scope: File-level validation only
- Checks:
- Required files present (config.json, weights, tokenizer files)
- No Git LFS pointers instead of actual files
- JSON files are valid JSON
- States: `"healthy" | "unhealthy"`
- Always included: in all `modelObject` instances
- Purpose: Verify that the model can be executed with `mlx-lm`
- Scope: Framework and model architecture validation
- Checks:
- Framework is MLX (GGUF/PyTorch models fail)
- Model architecture supported by current mlx-lm version
- Respects `MODEL_REMAPPING` (e.g., `mistral` → `llama`)
- States: `true | false`
- Always included: in all `modelObject` instances
- The runtime compatibility check requires the integrity check to pass first
- If the integrity check fails (`health: "unhealthy"`), the runtime check is skipped (`runtime_compatible: false`)
- The `reason` field describes the first problem found:
  - Integrity problems take precedence
  - Runtime problems are only shown if the files are healthy
  - `null` when both checks pass (`health: "healthy"` AND `runtime_compatible: true`)
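The gate logic above can be mirrored on the consumer side when triaging a `modelObject`. A minimal Python sketch (the `triage` helper is illustrative, not part of the tool):

```python
def triage(model):
    """Classify a modelObject using the integrity-first gate logic.

    Integrity problems take precedence; runtime problems are only
    reported when the files themselves are healthy, and `reason`
    is null once both checks pass.
    """
    if model["health"] != "healthy":
        # Runtime check was skipped; runtime_compatible is false here.
        return "integrity: " + model["reason"]
    if not model["runtime_compatible"]:
        # Files are fine, but the model cannot run under mlx-lm.
        return "runtime: " + model["reason"]
    return "ok"
```

Applied to the examples below, the GGUF snippet yields `runtime: Framework GGUF not executable with mlx-lm (requires MLX)` and the incomplete download yields `integrity: config.json missing`.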
Healthy MLX Model (Compatible):
/* Illustrative snippet - not a complete response */
{
"health": "healthy",
"runtime_compatible": true,
"reason": null
}

GGUF Model (Files OK, Not Executable):
/* Illustrative snippet - not a complete response */
{
"health": "healthy",
"runtime_compatible": false,
"reason": "Framework GGUF not executable with mlx-lm (requires MLX)"
}

Unsupported Architecture:
/* Illustrative snippet - not a complete response */
{
"health": "healthy",
"runtime_compatible": false,
"reason": "Model architecture 'qwen3_next' requires mlx-lm >= 0.28.0 (current: 0.27.1)"
}

Incomplete Download (Runtime Check Skipped):
/* Illustrative snippet - not a complete response */
{
"health": "unhealthy",
"runtime_compatible": false,
"reason": "config.json missing"
}

All commands require the `--json` flag for JSON output:
mlxk2 list --json    # JSON output
mlxk2 list           # Human-readable output

- CLI version (human): `mlxk2 --version`
- CLI version (JSON): `mlxk2 --version --json`
JSON output example:
{
"status": "success",
"command": "version",
"data": {
"cli_version": "2.0.4-beta.1",
"json_api_spec_version": "0.1.7",
"system": {
"memory_total_bytes": 137438953472
}
},
"error": null
}

Notes:

- Regular command responses (e.g., `list`, `show`) do not include a separate protocol tag; the spec version is reported by the `version` command in `data.json_api_spec_version`.
- The `system` object is `null` on non-macOS platforms where `sysctl hw.memsize` is unavailable (0.1.6+).
All commands support consistent JSON output with standardized error handling and exit codes.
All commands that return model information use the same minimal model object.
- `name`: Full HF name `org/model`.
- `hash`: 40-char commit hash of the selected snapshot, or `null`.
- `size_bytes`: Total size in bytes of files under the selected path (snapshot preferred, else model root).
- `last_modified`: ISO-8601 UTC timestamp (with `Z`) of the selected path.
- `framework`: `"MLX" | "GGUF" | "PyTorch" | "Unknown"`.
- `model_type`: `"chat" | "embedding" | "base" | "unknown"`.
- `capabilities`: e.g., `["text-generation", "chat"]`, `["embeddings"]`, or `["text-generation", "chat", "vision"]`.
- `health`: `"healthy" | "unhealthy"` (always present).
- `runtime_compatible`: `true | false` (0.1.5+, always present).
- `reason`: `string | null` (0.1.5+, describes the first problem found; `null` when both checks pass).
- `cached`: `true` for cache-managed models (HuggingFace cache), `false` for workspace paths (user-managed local directories).
Notes:
- No human-readable `size` field; only `size_bytes`.
- No human-readable `modified` field; `last_modified` is authoritative.
- No absolute filesystem paths are exposed (except for workspace paths, where `name` is the full path).
- `runtime_compatible` and `reason` fields added in spec version 0.1.5 (Issue #36).
- `vision` capability added in 0.1.5 as a backward-compatible enum extension (ADR-012 Phase 1a).
- `cached` field distinguishes cache-managed models (`true`) from workspace paths (`false`). Workspace support added in ADR-018 Phase 0c.
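A consumer-side structural check for this model object can be sketched directly from the field list above; the validator below is illustrative, not a normative schema:

```python
# Required fields and their JSON-mapped Python types, per the spec's
# modelObject field list. The validator itself is an illustrative sketch.
REQUIRED_FIELDS = {
    "name": str, "size_bytes": int, "last_modified": str,
    "framework": str, "model_type": str, "capabilities": list,
    "health": str, "runtime_compatible": bool, "cached": bool,
}

def is_model_object(obj):
    """Return True when obj carries every required field with a plausible type."""
    if not all(isinstance(obj.get(k), t) for k, t in REQUIRED_FIELDS.items()):
        return False
    # hash and reason are nullable per the spec.
    if not (obj.get("hash") is None or isinstance(obj.get("hash"), str)):
        return False
    return obj.get("reason") is None or isinstance(obj.get("reason"), str)
```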
| Command | Description | JSON-Only in 2.0 | Alpha Feature |
|---|---|---|---|
| `list` | List models with metadata and hash codes | ✅ | - |
| `show` | Detailed model inspection with files/config | ✅ | - |
| `health` | Check model integrity and corruption | ✅ | - |
| `pull` | Download models from HuggingFace | ✅ | - |
| `rm` | Delete models from cache | ✅ | - |
| `clone` | Clone models to workspace directory | ✅ | - |
| `convert` | Workspace transformations (experimental: `--repair-index`) | ✅ | `MLXK2_ENABLE_ALPHA_FEATURES=1` |
| `push` | Upload a local folder to Hugging Face | ✅ | - |
| `run` | Execute model inference | ✅ | - |
| `serve`/`server` | OpenAI-compatible API server | ✅ | - |
Notes:
- Commands marked with `MLXK2_ENABLE_ALPHA_FEATURES=1` are experimental and require this environment variable.
- Workspace Path Support (ADR-018 Phase 0c): Commands `show`, `run`, `serve`/`server`, and `health` now accept workspace paths (e.g., `./workspace` or `/absolute/path`) in addition to HuggingFace model IDs. Models in workspaces return `"cached": false` to distinguish them from cache-managed models.
Model Types (currently detected values, not normative):
"chat"- Language models with chat/instruction capability"embedding"- Embedding models for vector representations"base"- Base models for text completion (no chat template)"unknown"- Cannot determine model type from config
Capabilities Array:
Note: This list is not normative. mlx-knife reports capabilities it detects - models may have additional capabilities. See mlxk2/core/capabilities.py Capability enum for the authoritative list of currently detected values.
"text-generation"- Can generate text (all non-embedding models)"chat"- Supports chat template/instruction format (absence = base/completion model)"embeddings"- Can generate embeddings (mutually exclusive with text-generation)"vision"- Accepts image inputs (detected viamodel_typein vision families or presence ofpreprocessor_config.json)"audio"- Accepts audio inputs (detected viaaudio_configormodel_typein audio families)
Vision Example (Phase 1a, ADR-012):
{
"status": "success",
"command": "list",
"data": {
"models": [
{
"name": "mlx-community/llava-1.5-7b-hf-4bit-mlx",
"hash": "a5339a41b2e3abcdefgh1234567890ab12345678",
"size_bytes": 4613734656,
"last_modified": "2024-12-03T10:00:00Z",
"framework": "MLX",
"model_type": "chat",
"capabilities": ["text-generation", "chat", "vision"],
"health": "healthy",
"runtime_compatible": true,
"reason": null,
"cached": true
}
],
"count": 1
},
"error": null
}

Workspace Path Example (Phase 0c, ADR-018):
{
"status": "success",
"command": "show",
"data": {
"model": {
"name": "/Users/dev/my-workspace",
"hash": null,
"size_bytes": 4613734656,
"last_modified": "2025-12-29T14:30:00Z",
"framework": "MLX",
"model_type": "chat",
"capabilities": ["text-generation", "chat"],
"health": "healthy",
"runtime_compatible": true,
"reason": null,
"cached": false
},
"metadata": {
"model_type": "llama",
"quantization": "4bit"
}
},
"error": null
}

Basic Usage:
mlxk-json list --json # All models with full validation
mlxk-json list "mlx-community" --json # Filter by pattern
mlxk-json list "Llama" --json # Fuzzy matchingBehavior:
- Returns all cached models with complete metadata
- Performs both integrity and runtime compatibility checks (0.1.5+)
- Pattern filter is a case-insensitive substring match on `name`
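The same filter semantics can be reproduced client-side when post-processing a cached response; a minimal Python sketch:

```python
def filter_models(models, pattern=None):
    """Mirror the CLI's pattern filter: a case-insensitive substring
    match on the `name` field (client-side sketch)."""
    if not pattern:
        return list(models)
    needle = pattern.lower()
    return [m for m in models if needle in m["name"].lower()]
```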
JSON Schema:
{
"status": "success",
"command": "list",
"data": {
"models": [
{
"name": "mlx-community/Phi-3-mini-4k-instruct-4bit",
"hash": "a5339a41b2e3abcdefgh1234567890ab12345678",
"size_bytes": 4613734656,
"last_modified": "2024-10-15T08:23:41Z",
"framework": "MLX",
"model_type": "chat",
"capabilities": ["text-generation", "chat"],
"health": "healthy",
"runtime_compatible": true,
"reason": null,
"cached": true
},
{
"name": "mlx-community/mxbai-embed-large-v1",
"hash": "b5679a5f90abcdef1234567890abcdef12345678",
"size_bytes": 1200000000,
"last_modified": "2024-10-20T10:30:15Z",
"framework": "MLX",
"model_type": "embedding",
"capabilities": ["embeddings"],
"health": "healthy",
"runtime_compatible": true,
"reason": null,
"cached": true
},
{
"name": "TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF",
"hash": "e96c7a5f90abcdef1234567890abcdef12345678",
"size_bytes": 16900000000,
"last_modified": "2024-09-20T14:15:22Z",
"framework": "GGUF",
"model_type": "chat",
"capabilities": ["text-generation", "chat"],
"health": "healthy",
"runtime_compatible": false,
"reason": "Framework GGUF not executable with mlx-lm (requires MLX)",
"cached": true
},
{
"name": "mlx-community/Qwen3-Next-80B-A3B-Instruct-4bit",
"hash": "f1234a5f90abcdef1234567890abcdef12345678",
"size_bytes": 45000000000,
"last_modified": "2024-10-01T09:15:30Z",
"framework": "MLX",
"model_type": "chat",
"capabilities": ["text-generation", "chat"],
"health": "healthy",
"runtime_compatible": false,
"reason": "Model architecture 'qwen3_next' requires mlx-lm >= 0.28.0 (current: 0.27.1)",
"cached": true
},
{
"name": "corrupted/incomplete-download",
"hash": "c9876a5f90abcdef1234567890abcdef12345678",
"size_bytes": 2500000000,
"last_modified": "2024-09-15T12:00:00Z",
"framework": "MLX",
"model_type": "unknown",
"capabilities": [],
"health": "unhealthy",
"runtime_compatible": false,
"reason": "config.json missing",
"cached": true
}
],
"count": 12
},
"error": null
}

Empty Cache:
{
"status": "success",
"command": "list",
"data": {
"models": [],
"count": 0
},
"error": null
}

Usage:
mlxk-json health --json # Check all models
mlxk-json health "Phi-3" --json # Check specific pattern
mlxk-json health "Qwen3@e96" --json # Check specific hashHealthy Models:
{
"status": "success",
"command": "health",
"data": {
"healthy": [
{
"name": "mlx-community/Phi-3-mini-4k-instruct-4bit",
"status": "healthy",
"reason": "Model is healthy"
}
],
"unhealthy": [],
"summary": {
"total": 1,
"healthy_count": 1,
"unhealthy_count": 0
}
},
"error": null
}

Unhealthy Models (Real Scenario):
{
"status": "success",
"command": "health",
"data": {
"healthy": [],
"unhealthy": [
{
"name": "mlx-community/Phi-3-mini-4k-instruct-4bit",
"status": "unhealthy",
"reason": "config.json missing"
},
{
"name": "corrupted/model",
"status": "unhealthy",
"reason": "LFS pointers instead of files: model.safetensors"
}
],
"summary": {
"total": 2,
"healthy_count": 0,
"unhealthy_count": 2
}
},
"error": null
}

Ambiguous Pattern:
{
"status": "error",
"command": "health",
"data": null,
"error": {
"type": "ambiguous_match",
"message": "Multiple models match 'Llama'",
"matches": [
"mlx-community/Llama-3.2-1B-Instruct-4bit",
"mlx-community/Llama-3.2-3B-Instruct-4bit"
]
}
}

Usage:
mlxk-json show "Phi-3-mini" --json # Short name expansion
mlxk-json show "mlx-community/Phi-3-mini" --json # Full name
mlxk-json show "Qwen3@e96" --json # Specific hash
mlxk-json show "Phi-3-mini" --files --json # Include file listing
mlxk-json show "Phi-3-mini" --config --json # Include config.json contentBasic Model Information:
{
"status": "success",
"command": "show",
"data": {
"model": {
"name": "mlx-community/Phi-3-mini-4k-instruct-4bit",
"hash": "a5339a41b2e3abcdefgh1234567890ab12345678",
"size_bytes": 4613734656,
"framework": "MLX",
"model_type": "chat",
"capabilities": ["text-generation", "chat"],
"last_modified": "2024-10-15T08:23:41Z",
"health": "healthy",
"runtime_compatible": true,
"reason": null,
"cached": true
},
"metadata": {
"model_type": "phi3",
"quantization": "4bit",
"context_length": 4096,
"vocab_size": 32064,
"hidden_size": 3072,
"num_attention_heads": 32,
"num_hidden_layers": 32
}
},
"error": null
}

With Files Listing (--files):
{
"status": "success",
"command": "show",
"data": {
"model": {
"name": "mlx-community/Phi-3-mini-4k-instruct-4bit",
"hash": "a5339a41b2e3abcdefgh1234567890ab12345678",
"size_bytes": 4613734656,
"framework": "MLX",
"model_type": "chat",
"capabilities": ["text-generation", "chat"],
"last_modified": "2024-10-15T08:23:41Z",
"health": "healthy",
"runtime_compatible": true,
"reason": null,
"cached": true
},
"files": [
{"name": "config.json", "size": "1.2KB", "type": "config"},
{"name": "model.safetensors", "size": "2.3GB", "type": "weights"},
{"name": "model-00001-of-00002.safetensors", "size": "1.8GB", "type": "weights"},
{"name": "model-00002-of-00002.safetensors", "size": "200MB", "type": "weights"},
{"name": "tokenizer.json", "size": "2.1MB", "type": "tokenizer"},
{"name": "tokenizer_config.json", "size": "3.4KB", "type": "config"},
{"name": "special_tokens_map.json", "size": "588B", "type": "config"}
],
"metadata": null
},
"error": null
}

With Config Content (--config):
{
"status": "success",
"command": "show",
"data": {
"model": {
"name": "mlx-community/Phi-3-mini-4k-instruct-4bit",
"hash": "a5339a41b2e3abcdefgh1234567890ab12345678",
"size_bytes": 4613734656,
"framework": "MLX",
"model_type": "chat",
"capabilities": ["text-generation", "chat"],
"last_modified": "2024-10-15T08:23:41Z",
"health": "healthy",
"runtime_compatible": true,
"reason": null,
"cached": true
},
"config": {
"architectures": ["Phi3ForCausalLM"],
"model_type": "phi3",
"vocab_size": 32064,
"hidden_size": 3072,
"intermediate_size": 8192,
"num_hidden_layers": 32,
"num_attention_heads": 32,
"max_position_embeddings": 4096,
"rope_theta": 10000.0,
"quantization": {
"bits": 4,
"group_size": 64
}
},
"metadata": null
},
"error": null
}

Model Not Found:
{
"status": "error",
"command": "show",
"data": null,
"error": {
"type": "model_not_found",
"message": "No model found matching 'nonexistent-model'"
}
}

Ambiguous Match:
{
"status": "error",
"command": "show",
"data": null,
"error": {
"type": "ambiguous_match",
"message": "Multiple models match 'Llama'",
"matches": [
"mlx-community/Llama-3.2-1B-Instruct-4bit",
"mlx-community/Llama-3.2-3B-Instruct-4bit"
]
}
}

Non-normative schema for capabilities and model_type
- Removed enum constraints from `capabilities` and `model_type` in JSON schema
- Schema now accepts any string values (permissive, non-breaking change)
- mlx-knife is not a normative authority - it reports what it detects
- Authoritative list of currently detected values: `mlxk2/core/capabilities.py`
Capabilities: Single Source of Truth

- Added `Capability` enum in `mlxk2/core/capabilities.py` as SSOT
- Removed unused `"completion"` capability (base models are `["text-generation"]` without `"chat"`)
- Added `"audio"` capability detection (ADR-019)
System Memory Information

- Added `system` object to `version` command response
- `system.memory_total_bytes`: Total physical RAM in bytes (from `sysctl hw.memsize`)
- `system` is `null` on non-macOS platforms where sysctl is unavailable
- Enables memory-aware model loading (ADR-016)
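One way a consumer might use `system.memory_total_bytes` for memory-aware loading; the `fits_in_memory` helper and its 80% headroom factor are illustrative assumptions, not part of the spec:

```python
def fits_in_memory(version_data, model, headroom=0.8):
    """Rough memory-fit heuristic combining the version command's
    `system` object with a modelObject's size_bytes.

    The 0.8 headroom factor is an illustrative assumption. `system`
    is null on platforms without sysctl, so no decision is possible
    there; this sketch conservatively returns True in that case.
    """
    system = version_data.get("system")
    if system is None:
        return True
    return model["size_bytes"] <= system["memory_total_bytes"] * headroom
```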
Model Discovery: Vision capability flag

- Vision models detected via `preprocessor_config.json` presence
- `vision` added to `capabilities` enum (backward-compatible extension)
- Visible in `mlxk list --json`, `mlxk show --json`, `mlxk health --json`
- Example: `"capabilities": ["text-generation", "chat", "vision"]`
Note: Vision runtime support (mlxk run --image, Server API) is documented in README.md "Multi-Modal Support" and docs/SERVER-HANDBOOK.md.
Foundation: Model Object Schema

- Standardized `modelObject` across all commands
- Machine-readable fields: `size_bytes`, `last_modified` (ISO-8601 UTC with `Z`)
- No human-readable `size` or `modified` fields (JSON consumers parse structured data)
Issue #36: Separate Integrity and Runtime Compatibility Checks

- Added `runtime_compatible: boolean` field to `modelObject`
- Added `reason: string | null` field to `modelObject`
- Both fields always present in JSON output
- `runtime_compatible` checks:
  - Framework must be MLX (GGUF/PyTorch fail)
  - Model architecture must be supported by installed mlx-lm version
  - Respects `MODEL_REMAPPING` for aliased architectures
- Gate logic: Runtime check requires passing integrity check first
- `reason` field describes first problem found (integrity > runtime priority)
Usage:
mlxk-json pull "Phi-3-mini" --json # Short name expansion
mlxk-json pull "mlx-community/Phi-3-mini" --json # Full name
mlxk-json pull "microsoft/DialoGPT-small" --json # Non-MLX modelSuccessful Download:
{
"status": "success",
"command": "pull",
"data": {
"model": "mlx-community/Phi-3-mini-4k-instruct-4bit",
"download_status": "success",
"message": "Successfully downloaded model",
"expanded_name": "mlx-community/Phi-3-mini-4k-instruct-4bit"
},
"error": null
}

Already Exists (Bug - doesn't detect corruption):
{
"status": "success",
"command": "pull",
"data": {
"model": "mlx-community/Phi-3-mini-4k-instruct-4bit",
"download_status": "already_exists",
"message": "Model mlx-community/Phi-3-mini-4k-instruct-4bit already exists in cache",
"expanded_name": null
},
"error": null
}

Download Failed:
{
"status": "error",
"command": "pull",
"data": {
"model": "nonexistent/model",
"download_status": "failed",
"message": "",
"expanded_name": null
},
"error": {
"type": "download_failed",
"message": "Repository not found for url: https://huggingface.co/api/models/nonexistent/model"
}
}

Validation Error:
{
"status": "error",
"command": "pull",
"data": {
"model": null,
"download_status": "error",
"message": "",
"expanded_name": null
},
"error": {
"type": "ValidationError",
"message": "Model name too long: 105/96 characters"
}
}

Ambiguous Match:
{
"status": "error",
"command": "pull",
"data": {
"model": null,
"download_status": "unknown",
"message": "",
"expanded_name": null
},
"error": {
"type": "ambiguous_match",
"message": "Multiple models match 'Llama'",
"matches": [
"mlx-community/Llama-3.2-1B-Instruct-4bit",
"mlx-community/Llama-3.2-3B-Instruct-4bit"
]
}
}

Usage:
mlxk-json rm "Phi-3-mini" --json # Direct deletion (no locks)
mlxk-json rm "Phi-3-mini" --force --json # Force deletion (ignores locks)
mlxk-json rm "locked-model" --json # Error: requires --force due to locksSuccessful Deletion:
{
"status": "success",
"command": "rm",
"data": {
"model": "mlx-community/Phi-3-mini-4k-instruct-4bit",
"action": "deleted",
"message": "Successfully deleted mlx-community/Phi-3-mini-4k-instruct-4bit"
},
"error": null
}

Model has Active Locks (requires --force):
{
"status": "error",
"command": "rm",
"data": {
"model": "mlx-community/Phi-3-mini-4k-instruct-4bit",
"locks_detected": true,
"lock_files": [".locks/model-lock-12345.lock"]
},
"error": {
"type": "locks_present",
"message": "Model has active locks. Use --force to override."
}
}

Model Not Found:
{
"status": "error",
"command": "rm",
"data": null,
"error": {
"type": "model_not_found",
"message": "No models found matching 'nonexistent-model'"
}
}

Ambiguous Pattern:
{
"status": "error",
"command": "rm",
"data": {
"matches": [
"mlx-community/Llama-3.2-1B-Instruct-4bit",
"mlx-community/Llama-3.2-3B-Instruct-4bit"
]
},
"error": {
"type": "ambiguous_match",
"message": "Multiple models match 'Llama'. Please specify which model to delete."
}
}

Permission Error:
{
"status": "error",
"command": "rm",
"data": {
"model": "mlx-community/Phi-3-mini-4k-instruct-4bit"
},
"error": {
"type": "PermissionError",
"message": "Permission denied: Cannot delete read-only files"
}
}

Usage:
mlxk-json clone "Phi-3-mini" ./workspace --json # Clone to workspace directory
mlxk-json clone "mlx-community/Phi-3-mini" ./my-model --json # Full name to custom directory
mlxk-json clone "microsoft/DialoGPT-small" ./workspace --json # Non-MLX modelSuccessful Clone:
{
"status": "success",
"command": "clone",
"data": {
"model": "mlx-community/Phi-3-mini-4k-instruct-4bit",
"clone_status": "success",
"message": "Cloned to ./workspace",
"target_dir": "./workspace",
"expanded_name": "mlx-community/Phi-3-mini-4k-instruct-4bit"
},
"error": null
}

Target Directory Not Empty:
{
"status": "error",
"command": "clone",
"data": {
"model": null,
"clone_status": "error",
"target_dir": "./workspace"
},
"error": {
"type": "ValidationError",
"message": "Target directory './workspace' already exists and is not empty"
}
}

Clone Failed:
{
"status": "error",
"command": "clone",
"data": {
"model": "nonexistent/model",
"clone_status": "failed",
"target_dir": "./workspace"
},
"error": {
"type": "clone_failed",
"message": "Repository not found for url: https://huggingface.co/api/models/nonexistent/model"
}
}

Access Denied:
{
"status": "error",
"command": "clone",
"data": {
"model": "gated/model",
"clone_status": "access_denied",
"target_dir": "./workspace"
},
"error": {
"type": "access_denied",
"message": "Access denied: gated/private model 'gated/model'. Accept terms and set HF_TOKEN."
}
}

APFS Filesystem Error:
{
"status": "error",
"command": "clone",
"data": {
"model": "org/model",
"clone_status": "filesystem_error",
"target_dir": "./workspace"
},
"error": {
"type": "FilesystemError",
"message": "APFS required for clone operations."
}
}

mlxk-json push <dir> <org/model> [--create] [--private] [--branch <b>] [--commit "..."] [--verbose] [--check-only] --json
Behavior:
- Requires `HF_TOKEN` env.
- Default branch: `main` (subject to change).
- Fails if repo missing unless `--create` is provided.
- Sends folder as-is to the specified branch using `huggingface_hub.upload_folder`.
- `--verbose` affects only human output; JSON remains unchanged in structure.
- `--check-only` performs a local, content-oriented workspace validation and does not contact the Hub (no token required). Results are included under `data.workspace_health`.
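A consumer can gate uploads on the check-only result by reading `data.workspace_health.healthy` from the response; a minimal Python sketch (the parsing helper is illustrative):

```python
import json

def parse_check_only(raw):
    """Return True when a `push --check-only` JSON response reports a
    healthy workspace (data.workspace_health.healthy). Parsing helper
    is an illustrative sketch, not part of the tool."""
    resp = json.loads(raw)
    if resp["status"] != "success":
        return False
    # data may be null on errors; workspace_health only present in check-only mode.
    health = (resp.get("data") or {}).get("workspace_health") or {}
    return bool(health.get("healthy", False))
```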
Successful Upload (with changes):
{
"status": "success",
"command": "push",
"data": {
"repo_id": "org/model",
"branch": "main",
"commit_sha": "abcdef1234567890abcdef1234567890abcdef12",
"commit_url": "https://huggingface.co/org/model/commit/abcdef1",
"repo_url": "https://huggingface.co/org/model",
"uploaded_files_count": 3,
"local_files_count": 11,
"no_changes": false,
"created_repo": false,
"change_summary": {"added": 1, "modified": 2, "deleted": 0},
"message": "Push successful. Clone operations require APFS filesystem.",
"experimental": true,
"disclaimer": "Experimental feature (M0: upload only). No validation/filters; review on the Hub."
},
"error": null
}

No Changes (no-op commit avoided):
{
"status": "success",
"command": "push",
"data": {
"repo_id": "org/model",
"branch": "main",
"commit_sha": null,
"commit_url": null,
"repo_url": "https://huggingface.co/org/model",
"uploaded_files_count": 0,
"local_files_count": 11,
"no_changes": true,
"created_repo": false,
"message": "No files changed; skipped empty commit.",
"experimental": true,
"disclaimer": "Experimental feature (M0: upload only). No validation/filters; review on the Hub.",
"hf_logs": ["No files have been modified since last commit. Skipping to prevent empty commit."]
},
"error": null
}

Check-only (no network):
{
"status": "success",
"command": "push",
"data": {
"repo_id": "org/model",
"branch": "main",
"commit_sha": null,
"commit_url": null,
"repo_url": "https://huggingface.co/org/model",
"local_files_count": 11,
"no_changes": null,
"created_repo": false,
"message": "Check-only: no upload performed.",
"workspace_health": {
"files_count": 11,
"total_bytes": 289612345,
"config": {"exists": true, "valid_json": true, "path": "/path/to/config.json"},
"weights": {"count": 3, "formats": ["safetensors"], "index": {"has_index": true, "missing": []}, "pattern_complete": true},
"anomalies": [],
"healthy": true
},
"experimental": true,
"disclaimer": "Experimental feature (M0: upload only). No validation/filters; review on the Hub."
},
"error": null
}

Missing Token:
{
"status": "error",
"command": "push",
"data": {
"repo_id": "org/model",
"branch": "main",
"repo_url": "https://huggingface.co/org/model",
"uploaded_files_count": null,
"local_files_count": null,
"no_changes": null,
"created_repo": false,
"experimental": true,
"disclaimer": "Experimental feature (M0: upload only). No validation/filters; review on the Hub."
},
"error": {
"type": "auth_error",
"message": "HF_TOKEN not set"
}
}

All errors follow a consistent format with detailed error types:
Validation Errors:
- `ValidationError` - Invalid input (96 char limit, empty names)
- `ambiguous_match` - Multiple models match pattern
- `model_not_found` - No models match pattern
Network Errors:
- `download_failed` - HuggingFace API errors, network timeouts
- `NetworkError` - Connection issues
System Errors:
- `PermissionError` - File system permission denied
- `OperationError` - Cache corruption, disk full
- `InternalError` - Unexpected system errors
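Scripts can branch on `error.type` using these groups; the Python sketch below mirrors the categories above (the grouping into coarse classes is itself illustrative):

```python
# Documented error types grouped into the broad classes above.
# clone_failed and FilesystemError appear in the clone section of the spec.
NETWORK_ERRORS = {"download_failed", "NetworkError", "clone_failed"}
SYSTEM_ERRORS = {"PermissionError", "OperationError", "InternalError", "FilesystemError"}

def classify_error(error):
    """Return a coarse class for a response's error object (or None)."""
    if error is None:
        return "success"
    if error["type"] in NETWORK_ERRORS:
        return "network"
    if error["type"] in SYSTEM_ERRORS:
        return "system"
    # ValidationError, ambiguous_match, model_not_found, ...
    return "validation"
```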
Error Response Schema:
{
"status": "error",
"command": "pull",
"data": { /* partial data if available */ },
"error": {
"type": "ValidationError",
"message": "Repository name exceeds HuggingFace Hub limit: 105/96 characters"
}
}

Cache Corruption (Health Check Bug):
{
"status": "success",
"command": "health",
"data": {
"healthy": [],
"unhealthy": [{
"name": "mlx-community/Phi-3-mini-4k-instruct-4bit",
"status": "unhealthy",
"reason": "config.json missing"
}],
"summary": {
"total": 1,
"healthy_count": 0,
"unhealthy_count": 1
}
},
"error": null
}

Pull Refuses Corrupted Model (Bug):
{
"status": "success",
"command": "pull",
"data": {
"download_status": "already_exists",
"message": "Model already exists in cache"
},
"error": null
}

Model Management Automation:
# List all MLX models with hashes
mlxk-json list --json | jq -r '.data.models[] | select(.framework=="MLX") | "\(.name)@\(.hash)"'
# Get model hashes for pattern matching
mlxk-json list "Qwen" --json | jq -r '.data.models[] | .hash'
# Count models by framework
mlxk-json list --json | jq '.data.models | group_by(.framework) | map({framework: .[0].framework, count: length})'
# Health summary
mlxk-json health --json | jq '.data.summary'
# Find unhealthy models
mlxk-json health --json | jq -r '.data.unhealthy[].name'
# Filter by pattern
mlxk-json list "Llama" --json | jq '.data.count'
# Model sizes with hashes
mlxk-json list --json | jq -r '.data.models[] | "\(.name)@\(.hash): \(.size_bytes)"'
# Get detailed model info
mlxk-json show "Phi-3-mini" --json | jq '.data.model'
# List all files in a model
mlxk-json show "Phi-3-mini" --files --json | jq -r '.data.files[] | "\(.name): \(.size)"'
# Extract model config
mlxk-json show "Phi-3-mini" --config --json | jq '.data.config.quantization'Automated Health Monitoring:
#!/bin/bash
# Check if any models are unhealthy
unhealthy_count=$(mlxk-json health --json | jq '.data.summary.unhealthy_count')
if [ "$unhealthy_count" -gt 0 ]; then
echo "Warning: $unhealthy_count unhealthy models found"
mlxk-json health --json | jq -r '.data.unhealthy[] | "UNHEALTHY: \(.name) - \(.reason)"'
fi

Batch Operations:
# Pull multiple models
for model in "Phi-3-mini" "Llama-3.2-1B"; do
echo "Pulling $model..."
mlxk-json pull "$model" --json | jq '.data.download_status'
done
# Clean up old models
mlxk-json list --json | jq -r '.data.models[] | select(.size_bytes > 1073741824) | .name' | while read model; do
echo "Found large model: $model"
done

- No implementation details: No cache paths, internal directories, or implementation specifics
- No user-specific data: No usernames in paths or environment-dependent information
- Consistent schema: All commands follow the same `status`/`command`/`data`/`error` structure
- Scriptable output: Rich structured data optimized for `jq` and automation
- Backward compatible: Exit codes remain unchanged for script compatibility
All commands use consistent exit codes for scripting:
- `0` - Success
- `1` - General error (validation, not found, etc.)
- `2` - Network/download error
- `3` - Permission/filesystem error
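A thin wrapper can combine the JSON payload with this exit-code contract; the Python sketch below is illustrative, and the `mlxk-json` command name follows the usage examples in this document:

```python
import json
import subprocess

# Exit-code contract as documented above.
EXIT_MEANING = {
    0: "success",
    1: "general error",
    2: "network/download error",
    3: "permission/filesystem error",
}

def run_mlxk(args):
    """Run a command with --json and return (payload, exit-code meaning).

    payload is the parsed response envelope, or None when the command
    produced no stdout.
    """
    proc = subprocess.run(["mlxk-json", *args, "--json"],
                          capture_output=True, text=True)
    payload = json.loads(proc.stdout) if proc.stdout.strip() else None
    return payload, EXIT_MEANING.get(proc.returncode, "unknown")
```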
- 2.0.0-alpha: JSON-only implementation with `mlxk-json --json`
- 2.0.0-alpha.1: Full implementation with both JSON and human-readable output
- 2.0.0-alpha.2: Push function protocol extension (json-0.1.3)
{ "status": "success" | "error", "command": "list" | "show" | "health" | "pull" | "rm" | "clone" | "version" | "push" | "run" | "server", "data": { /* command-specific data */ }, "error": null | { "type": "string", "message": "string" } }