A local-first, privacy-first desktop application for analyzing and summarizing code snippets using local AI models. Built with Tauri (Rust) and React (TypeScript).
- 100% Local: All processing happens on your machine. No cloud calls, no internet required.
- Privacy-First: Your potentially sensitive code never leaves your computer.
- Secret Protection: Automatic detection and optional masking of secrets before processing.
- Offline-Capable: Works entirely offline with a local Ollama instance.
- Multiple Analysis Modes: Get summaries, junior-friendly explanations, or security risk assessments.
- Supported Languages: Java, Python, JavaScript, SQL, Visual Basic, JSON, CSS
- Analysis Modes:
- Summarize: Get a concise overview with structured breakdown
- Explain for Junior: Beginner-friendly explanations with detailed walkthroughs
- Risk Scan: Security-focused analysis highlighting potential vulnerabilities
- Secret Scanning: Detects AWS keys, JWT tokens, passwords, API keys, PEM keys, and Bearer tokens
- Structured Output: JSON-validated responses with sections for summary, walkthrough, inputs, outputs, side effects, risks, and confidence scores (a rough sketch of this shape follows the feature list)
- Copy Functionality: Copy any section individually to your clipboard
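To make the structured output concrete, the validated response roughly corresponds to a shape like the one below. This is a hedged sketch: the field names are illustrative assumptions and are not copied from the app's types.rs.

```rust
// Illustrative only: an assumed shape for the structured analysis result.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct AnalysisResult {
    summary: String,
    walkthrough: Vec<String>,
    inputs: Vec<String>,
    outputs: Vec<String>,
    side_effects: Vec<String>,
    risks: Vec<String>,
    confidence: f32, // e.g. a score between 0.0 and 1.0
}
```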
Before you can use Code Summarizer, you need:
- Node.js (v18 or higher)
- Rust (latest stable)
- Ollama - Local LLM server
Download and install Ollama from ollama.ai
After installing Ollama, pull a model. Choose based on your available RAM:
# Very lightweight - ~650MB model, works on systems with 2-4GB RAM
ollama pull tinyllama
# Lightweight - ~2GB model, requires 4-6GB system RAM
ollama pull llama2
# Better quality - ~4GB model, requires 8GB+ system RAM
ollama pull codellama
# High quality - ~4.5GB model, requires 8GB+ system RAM
ollama pull mistral
Important: Each model needs ~2x its size in RAM when running. If you get memory errors, use a smaller model.
Verify Ollama is running:
ollama list
# Clone the repository
git clone https://github.com/sekacorn/CodeSummarizer.git
cd CodeSummarizer
# Install frontend dependencies
npm install
# Generate app icons (first-time setup only)
npm run generate-icons
# The Rust dependencies will be handled by Tauri automatically
Note: If icons are missing, you can regenerate them anytime with npm run generate-icons.
npm run tauri dev
This will:
- Start the Vite development server
- Compile the Rust backend
- Launch the application window
npm run tauri build
The built application will be in src-tauri/target/release/.
- Start Ollama: Make sure Ollama is running (ollama serve)
- Launch the App: Run npm run tauri dev
- Select Language: Choose the programming language of your code
- Select Model: Pick an Ollama model from the dropdown
- Configure Secret Masking: Toggle "Mask secrets before sending to model" (ON by default)
- Paste Code: Enter your code in the text area
- Choose Action:
- Click Summarize for a high-level overview
- Click Explain for Junior for beginner-friendly explanations
- Click Risk Scan for security analysis
- Review Results: The right panel will display structured analysis with copyable sections
Error: "Ollama is not running. Please start Ollama and try again."
Solution:
# Start Ollama server
ollama serve
Or, if Ollama is installed as a service, ensure it is running in the background.
Error: "No models found. Please pull a model using 'ollama pull '."
Solution:
# Pull a model
ollama pull llama2
# Verify it's installed
ollama list
Error: "model requires more system memory (X GiB) than is available (Y GiB)"
Solution: This means the model you selected is too large for your system's available RAM. Each model requires approximately 2x its download size in system memory when running.
# For systems with 2-4GB RAM
ollama pull tinyllama
# For systems with 4-6GB RAM
ollama pull llama2
# For systems with 8GB+ RAM
ollama pull codellama
ollama pull mistral
After pulling a smaller model, select it from the dropdown in the app and try again.
Error: "Request to Ollama timed out. The model may be too large or your system may be slow."
Solutions:
- Use a smaller model (e.g., tinyllama or llama2 instead of codellama)
- Ensure Ollama is configured to use the GPU if one is available
- Close other resource-intensive applications
- Increase the timeout (requires a code modification in src-tauri/src/commands/ollama.rs; see the sketch below)
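For orientation, a reqwest timeout is normally set when the HTTP client is built, and the request itself goes to the local Ollama server. The sketch below is only an illustration: the function name, the 120-second value, and the payload are assumptions rather than the app's actual code, and the endpoint and fields follow the public Ollama HTTP API.

```rust
// Hedged sketch only; see src-tauri/src/commands/ollama.rs for the real code.
use std::time::Duration;

async fn generate(model: &str, prompt: &str) -> Result<String, reqwest::Error> {
    let client = reqwest::Client::builder()
        // Raise this (e.g. to 300s) if large models time out on slow machines.
        .timeout(Duration::from_secs(120))
        .build()?;

    let body = serde_json::json!({
        "model": model,
        "prompt": prompt,
        "stream": false
    });

    // All traffic stays on localhost; this is the only endpoint the app talks to.
    let response = client
        .post("http://127.0.0.1:11434/api/generate")
        .json(&body)
        .send()
        .await?;

    response.text().await
}
```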
Error: "Model 'xyz' not found. Please pull it using 'ollama pull xyz'."
Solution:
ollama pull <model-name>
Issue: Model output is not valid JSON
Explanation: Sometimes models don't follow the JSON format strictly, especially smaller models.
Solutions:
- Try a more capable model (e.g., mistral or codellama)
- Check "Show Raw Model Output" to see what the model returned
- Retry the analysis
If you see errors about port 1420 being in use:
- Stop any other Vite/Tauri dev servers
- Or modify the port in vite.config.ts
The app automatically scans for:
- AWS Access Keys: Pattern AKIA[0-9A-Z]{16}
- JWT Tokens: Three base64url segments separated by dots
- Credentials: password/secret/api_key assignments
- PEM Private Keys: -----BEGIN PRIVATE KEY----- blocks
- Bearer Tokens: Authorization headers with Bearer tokens
When enabled (default), detected secrets are replaced with ***REDACTED*** before sending to the local model.
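As a rough illustration of how this kind of masking can be done with the regex crate, using the AWS key pattern listed above (the real patterns and function names in secrets.rs may differ):

```rust
// Illustrative sketch; not the actual contents of src-tauri/src/commands/secrets.rs.
use regex::Regex;

fn mask_aws_keys(code: &str) -> String {
    // AKIA followed by 16 uppercase letters or digits, as listed above.
    let re = Regex::new(r"AKIA[0-9A-Z]{16}").expect("valid regex");
    re.replace_all(code, "***REDACTED***").into_owned()
}
```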
- Code is never written to disk by this application
- No telemetry or logging of your code
- All processing happens in-memory
- All model requests go exclusively to http://127.0.0.1:11434
- No external network requests are made
code-summarizer/
├── src/ # Frontend (React + TypeScript)
│ ├── components/ # React components
│ ├── lib/ # API, schemas, utilities
│ ├── App.tsx # Main application
│ ├── main.tsx # Entry point
│ └── styles.css # Styling
├── src-tauri/ # Backend (Rust)
│ ├── src/
│ │ ├── commands/ # Tauri commands
│ │ │ ├── ollama.rs # Ollama integration
│ │ │ ├── secrets.rs # Secret scanning
│ │ │ ├── prompt.rs # Prompt templates
│ │ │ └── types.rs # Shared types
│ │ └── main.rs # Tauri entry point
│ ├── Cargo.toml # Rust dependencies
│ └── tauri.conf.json # Tauri configuration
├── package.json # Node dependencies
└── vite.config.ts # Vite configuration
- Frontend: React 18, TypeScript, Vite, Zod (schema validation)
- Backend: Rust, Tauri, reqwest (HTTP client), regex (pattern matching), serde (serialization)
- AI: Ollama (local LLM server)
The app uses a custom icon generation script. Icons are automatically generated from an SVG template:
# Generate all required icon formats
npm run generate-icons
This creates:
- Windows: icon.ico
- macOS: icon.icns
- Linux/Web: Various PNG sizes (32x32, 128x128, etc.)
The generated icons are placed in src-tauri/icons/. The source files (app-icon.svg and app-icon.png) are temporary and excluded from git.
# Start dev server (frontend only)
npm run dev
# Start full Tauri app in dev mode
npm run tauri:dev
# Build for production
npm run tauri:build
# Generate icons
npm run generate-icons
Edit src/lib/languages.ts and add your language to SUPPORTED_LANGUAGES.
Modify src-tauri/src/commands/prompt.rs to adjust how prompts are constructed for different analysis modes.
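As a hedged illustration only (the mode names and instruction wording below are assumptions, not the app's actual templates), a per-mode prompt builder might look like this:

```rust
// Sketch of a mode-dependent prompt template; the real prompt.rs may differ.
fn build_prompt(mode: &str, language: &str, code: &str) -> String {
    let instruction = match mode {
        "summarize" => "Summarize this code and answer using the required JSON structure.",
        "explain_junior" => "Explain this code step by step for a junior developer.",
        "risk_scan" => "List potential security risks and vulnerabilities in this code.",
        _ => "Summarize this code.",
    };
    format!("{instruction}\n\nLanguage: {language}\n\nCode:\n{code}")
}
```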
Add regex patterns in src-tauri/src/commands/secrets.rs in the scan_for_secrets function.
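For example, a hypothetical extra pattern for Slack-style tokens could be registered alongside the existing ones (how scan_for_secrets actually stores its patterns may differ, so treat this only as a shape to follow):

```rust
// Hypothetical additional pattern; adapt to how secrets.rs organizes its patterns.
use regex::Regex;

fn slack_token_pattern() -> Regex {
    // Slack tokens start with "xox" plus a type letter, then dash-separated segments.
    Regex::new(r"xox[baprs]-[0-9A-Za-z-]{10,}").expect("valid regex")
}
```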
This project is provided as-is for local use. Modify and distribute as needed.
Contributions are welcome! Please ensure:
- Code passes TypeScript and Rust compilation
- Secret scanning tests pass
- UI remains clean and functional
- Security-first principles are maintained
Q: Does this send my code to the internet? A: No. All processing is 100% local. The app only communicates with Ollama running on localhost (127.0.0.1).
Q: Can I use this offline? A: Yes, as long as Ollama and the required models are already installed and running locally.
Q: What models work best? A: For code analysis:
- Best on limited RAM (2-4GB): tinyllama - Fast but basic analysis
- Balanced (4-6GB): llama2 - Good quality and reasonable speed
- Best quality (8GB+): codellama or mistral - Most accurate analysis
Choose based on your available system RAM. Models need ~2x their size in memory when running.
Q: Why is the analysis slow? A: LLM inference on CPU can be slow. Consider using a GPU-accelerated setup with Ollama or using smaller models.
Q: Can I analyze large files? A: The app is designed for code snippets. Very large files may hit token limits in the model. Break them into smaller logical sections.
For issues or questions:
- Check the Troubleshooting section above
- Verify Ollama is running and models are installed
- Check the browser/dev console for errors (in dev mode)
Built with privacy and security in mind. Your code stays on your machine.


