Privacy-first AI from the command line. No browser. No tracking. Just you and the model.
The official command-line interface for Venice AI. Chat with AI models, generate images, convert text to speech, transcribe audio, and more—all from your terminal.
npm install -g veniceai-cli

Or use without installing:

npx veniceai-cli chat 'Hello, world!'
- Get your API key from Venice AI Settings
- Configure the CLI:
  venice config set api_key YOUR_API_KEY
  Or use an environment variable:
  export VENICE_API_KEY=YOUR_API_KEY
- Start chatting:
  venice chat "What is the meaning of life?"
- 🤖 Chat with state-of-the-art AI models
- 🔐 End-to-End Encryption (E2EE) for maximum privacy
- 🛡️ TEE Attestation verification for trusted execution
- 🔍 Web Search with AI-powered synthesis
- 🖼️ Image Generation from text prompts
- 🔊 Text-to-Speech with 35+ voices across languages
- 🎤 Speech-to-Text transcription with timestamps
- 🎬 Video Generation (text-to-video, image-to-video)
- 📐 Embeddings generation
- 🔧 Function Calling with built-in tools
- 🎭 Character Personas for fun interactions
- 💾 Conversation History with continue mode
- 📊 Usage Tracking for token monitoring
- 🐚 Shell Completions for bash, zsh, fish
# Basic chat
venice chat "Explain quantum computing in simple terms"
# Use a specific model
venice chat -m deepseek-v3.2 "Solve this step by step: 15% of 340"
# With a system prompt
venice chat -s "You are a helpful coding assistant" "Write a fizzbuzz in Python"
# Use a character persona
venice chat -c pirate "Tell me about the weather"
# Continue the previous conversation
venice chat --continue "What about the next step?"
# With function calling
venice chat -t calculator,weather "What's 25 * 4.5?"
# JSON output for scripting
venice chat -f json "List 3 colors" | jq '.content'
# Disable streaming
venice chat --no-stream "Quick question"
# E2EE encrypted chat (auto-enabled based on model capabilities)
venice chat -m e2ee-qwen3-5-122b-a10b "This message is end-to-end encrypted"
# TEE-only mode (attestation verified, no encryption)
venice chat -m e2ee-qwen3-5-122b-a10b --no-e2ee "Verified but not encrypted"
# Show TEE attestation details
venice chat -m e2ee-qwen3-5-122b-a10b --tee-verify "Verify the secure enclave"
# Quiet mode - E2EE without status messages (looks like normal chat)
venice chat -m e2ee-qwen3-5-122b-a10b -q "This is encrypted but looks like normal chat"

Options:
| Option | Description |
|---|---|
| `-m, --model <model>` | Model to use (default: `kimi-k2-5`) |
| `-s, --system <prompt>` | System prompt |
| `-c, --character <name>` | Character persona |
| `-t, --tools <tools>` | Comma-separated list of tools |
| `--interactive-tools` | Approve each tool call |
| `--continue` | Continue last conversation |
| `--no-stream` | Disable streaming output |
| `--web-search` | Enable web search for current information |
| `--no-thinking` | Disable reasoning on reasoning models |
| `--strip-thinking` | Strip thinking blocks from response |
| `--no-venice-prompt` | Disable Venice system prompts |
| `--search-results-in-stream` | Include search results in stream |
| `--e2ee` | Enable E2EE encryption (auto-enabled for models with E2EE capability) |
| `--no-e2ee` | Disable E2EE, use TEE-only mode (verifies attestation without encryption) |
| `--tee-verify` | Show TEE attestation details |
| `-q, --quiet` | Hide E2EE/TEE status messages (show only response) |
| `-f, --format <format>` | Output format (pretty\|json\|markdown\|raw) |
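The `json` format makes the CLI easy to script. A sketch of the pattern, fed a mock response here instead of a live API call, and assuming the payload exposes a `content` field (as in the `jq '.content'` example above):

```shell
# Extract just the text from a JSON-formatted chat response.
# In practice the response would come from: venice chat -f json "List 3 colors"
response='{"content":"red, green, blue"}'
echo "$response" | jq -r '.content'
# → red, green, blue
```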
# Search with AI synthesis
venice search "Latest developments in fusion energy"
# Limit results
venice search -n 10 "Best practices for TypeScript"
# Include citations in response
venice search --citations "Latest AI news"
# Enable deep web scraping
venice search --scrape "Company research on Anthropic"

# Generate an image
venice image "A serene mountain lake at sunset"
# Save to a file
venice image -o sunset.png "A serene mountain lake at sunset"
# Custom dimensions
venice image -w 1024 -h 768 "Landscape photograph"
# Use a specific model
venice image -m flux-1-dev "Artistic portrait"

# Upscale an image
venice upscale photo.jpg -o photo_upscaled.jpg
# 4x upscale
venice upscale photo.jpg -s 4 -o photo_4x.jpg

# Generate speech
venice tts "Hello, world!"
# Custom voice and output
venice tts -v bf_emma -o greeting.mp3 "Good morning, everyone!"
# From stdin
echo "Text to speak" | venice tts -o output.mp3

# Transcribe audio
venice transcribe recording.mp3
# With word/segment timestamps
venice transcribe -t recording.mp3
# Use a specific model (Whisper or Parakeet)
venice transcribe -m openai/whisper-large-v3 interview.wav
# With language hint
venice transcribe -l es spanish_audio.mp3
# JSON output
venice transcribe -f json interview.wav

Available STT Models:
- nvidia/parakeet-tdt-0.6b-v3 (default, fast)
- openai/whisper-large-v3
Venice supports AI video generation using state-of-the-art models. Video generation is asynchronous (queue-based).
# Queue a text-to-video generation
venice video generate "A cat playing with a ball in slow motion"
# Use a specific model
venice video generate -m veo3-fast-text-to-video "Cinematic sunset over mountains"
# Image-to-video with reference image
venice video generate -m wan-2.6-image-to-video -i photo.jpg "The scene comes alive"
# Set duration and aspect ratio
venice video generate -d 10s -a 16:9 "A peaceful forest scene"
# Check status of a video job
venice video status <queue_id>
# Wait for completion (polls every 5s)
venice video status -w <queue_id>
# Download completed video
venice video retrieve <queue_id> -o my_video.mp4
# List available video models
venice video models

Available Video Models:
- Wan 2.6: wan-2.6-text-to-video, wan-2.6-image-to-video
- Veo3: veo3-fast-text-to-video, veo3-fast-image-to-video
- Sora2: sora2-text-to-video, sora2-image-to-video
- Kling V3: kling-v3-pro-text-to-video, kling-v3-pro-image-to-video
- Grok Imagine: grok-imagine-text-to-video, grok-imagine-image-to-video
- LTX2: ltx2-fast-text-to-video, ltx2-fast-image-to-video
Venice supports Trusted Execution Environment (TEE) attestation for models running in secure enclaves. This provides cryptographic proof that your data is processed in a trusted environment.
# Fetch and display TEE attestation for a model
venice tee attestation tee-qwen3-5-122b-a10b
# With verbose TDX quote details
venice tee attestation --verbose tee-qwen3-5-122b-a10b
# Run TEE attestation policy verification
venice tee verify tee-qwen3-5-122b-a10b
# Verify a response signature (requires completion ID from a previous request)
venice tee signature e2ee-qwen3-5-122b-a10b <completion-id>
# Verify signature matches expected signer address
venice tee signature e2ee-qwen3-5-122b-a10b <completion-id> --verify-signer 0x123...

TEE Commands:

| Command | Description |
|---|---|
| `attestation <model>` | Fetch and display TEE attestation report |
| `verify <model>` | Run TEE attestation policy verification |
| `signature <model> <id>` | Fetch and verify TEE response signature |
# List all models
venice models
# Filter by type
venice models -t image
venice models -t audio
# Show only privacy-preserving models
venice models --privacy
# Show TEE-attestable models
venice models --tee
# Show E2EE-capable models
venice models --e2ee
# Search models
venice models -s llama

# Generate embeddings
venice embeddings "Text to embed"
# Save to file
venice embeddings -o vectors.json "Text to embed"

# Interactive setup
venice config init
# Show current config
venice config show
# Set values
venice config set api_key YOUR_KEY
venice config set default_model kimi-k2-5
venice config set default_voice af_sky
# Get a value
venice config get default_model
# Remove a value
venice config unset default_model
# Show config file path
venice config path

Available config keys:

| Key | Description |
|---|---|
| `api_key` | Your Venice API key |
| `default_model` | Default chat model |
| `default_image_model` | Default image generation model |
| `default_voice` | Default TTS voice |
| `output_format` | Default output format |
| `no_color` | Disable colored output |
| `show_usage` | Show token usage after requests |
# List recent conversations
venice history list
# Show a specific conversation
venice history show
# Clear all history
venice history clear
# Export history
venice history export history.json

# Show last 7 days
venice usage
# Show today only
venice usage --today
# Show this month
venice usage --month
# Custom range
venice usage -d 30

# List available characters
venice characters
# Use a character
venice chat -c wizard "What is the nature of magic?"

Available characters: pirate, wizard, scientist, poet, coder, teacher, comedian, philosopher
# List available TTS voices
venice voices

# Bash
venice completions bash >> ~/.bashrc
# Zsh
venice completions zsh >> ~/.zshrc
# Fish
venice completions fish > ~/.config/fish/completions/venice.fish

The CLI includes several built-in tools for function calling:
| Tool | Description |
|---|---|
| `calculator` | Mathematical calculations |
| `weather` | Weather information (simulated) |
| `datetime` | Current date and time |
| `random` | Random number/choice generation |
| `base64` | Base64 encoding/decoding |
| `hash` | Hash generation (md5, sha256, etc.) |
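The `base64` and `hash` tools cover the same ground as standard shell utilities, which is handy for sanity-checking tool results locally:

```shell
# What the base64 tool computes, done locally:
printf 'hello' | base64            # → aGVsbG8=
# What the hash tool computes for sha256, done locally:
printf 'hello' | sha256sum | cut -d' ' -f1
```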
# Use tools
venice chat -t calculator "What's the square root of 144?"
venice chat -t datetime "What day is it today?"
# Interactive tool approval
venice chat --interactive-tools -t calculator "Calculate 15% tip on $85"

| Format | Description | Use Case |
|---|---|---|
| `pretty` | Colored, formatted (default) | Interactive use |
| `json` | Machine-readable JSON | Scripting, piping |
| `markdown` | Markdown formatted | Documentation |
| `raw` | Plain text, no decoration | Pipes, simple output |
The CLI automatically detects when output is being piped and switches to raw format.
# Explicit format
venice chat -f json "List items" | jq '.'
# Auto-detected raw format when piped
venice chat "Generate code" | pbcopy

Venice CLI is designed with privacy in mind:
- End-to-End Encryption (E2EE): Messages encrypted client-side, decrypted only in the TEE—Venice cannot read your data
- TEE Attestation: Cryptographically verify that models run in secure enclaves before sending data
- No browser tracking: Terminal interactions don't expose browser metadata
- No telemetry: The CLI doesn't collect or send usage data
- Local configuration: API key stored locally with restricted permissions
- Transparent: You can see exactly what's being sent to the API
- Privacy-preserving models: Use `venice models --privacy` to find models with no data retention
E2EE models provide the highest level of privacy. The CLI automatically detects E2EE support via model capabilities (not model names). When using an E2EE-capable model:
- The CLI fetches and verifies TEE attestation
- An ephemeral key pair is generated for the session
- All messages are encrypted client-side using ECDH + AES-GCM
- Only the TEE enclave can decrypt and process your data
- Responses are encrypted and decrypted client-side
# List E2EE-capable models
venice models --e2ee
# Chat with E2EE (auto-enabled based on model capabilities)
venice chat -m <e2ee-capable-model> "Your private message here"
# TEE-only mode: verify attestation without encryption
venice chat -m <e2ee-capable-model> --no-e2ee "TEE verified, not encrypted"

Note: E2EE mode disables tools and web search to maintain end-to-end encryption.
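The ephemeral key agreement described above can be illustrated with OpenSSL (1.1.1+). This is a sketch, not the CLI's actual implementation, and the curve (X25519) is an assumption for illustration; it shows how two ephemeral key pairs yield a single shared secret that would then key AES-GCM:

```shell
# Each side generates an ephemeral key pair (illustrative only).
openssl genpkey -algorithm X25519 -out client.key
openssl pkey -in client.key -pubout -out client.pub
openssl genpkey -algorithm X25519 -out enclave.key
openssl pkey -in enclave.key -pubout -out enclave.pub
# ECDH: each side combines its own private key with the peer's public key...
openssl pkeyutl -derive -inkey client.key -peerkey enclave.pub -out secret_a
openssl pkeyutl -derive -inkey enclave.key -peerkey client.pub -out secret_b
# ...and both arrive at the same shared secret.
cmp -s secret_a secret_b && echo "shared secrets match"
```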
TEE (Trusted Execution Environment) models run in secure enclaves with cryptographic attestation. The CLI automatically verifies attestation for models with TEE support.
# List TEE-capable models
venice models --tee
# Chat with TEE attestation verification
venice chat -m <tee-capable-model> "Verified secure execution"

| Variable | Description |
|---|---|
| `VENICE_API_KEY` | API key (overrides config file) |
| `NO_COLOR` | Disable colored output |
- Node.js 18.0.0 or higher
- A Venice AI API key
# Clone the repo
git clone https://github.com/veniceai/venice-cli.git
cd venice-cli
# Install dependencies
npm install
# Build
npm run build
# Run locally
npm run dev -- chat "Hello"

See CONTRIBUTING.md for guidelines.
MIT © Venice AI
Made with ❤️ for privacy-conscious developers.