🥷 Free, open-source AI text humanizer, corpus-trained on 10,000 Q1 academic papers. 13 AI providers, 4 rewrite levels, multi-pass ninja mode. No login, no limits, 100% client-side.
Live app: https://stealthhumanizer.vercel.app/ · Docs: https://rudra496.github.io/StealthHumanizer/
- Features
- How It Beats AI Detectors
- Architecture
- Installation
- Quickstart
- Configuration
- Usage Examples
- Testing and Local Development
- Benchmarks and Performance
- Roadmap
- Contributing
- Security, Support, and License
## Features
- Corpus-trained humanization engine built from 10,000 Q1 open-access academic papers spanning 11 domains (2018–2025).
- Dynamic detection thresholds calibrated against real human writing patterns, not guesswork: sentence length, burstiness, vocabulary diversity, and transition frequency.
- Expanded AI phrase database with 150+ collocation replacements for natural output.
- Domain-aware style matching across 11 academic disciplines.
- 13 AI provider support with configurable API keys (free and paid).
- 4 rewrite levels including multi-pass ninja mode for maximum transformation.
- 13 preset writing tones and granular style controls.
- Integrated AI detection with corpus-calibrated heuristic scoring.
- Readability analysis (Flesch-Kincaid, Gunning Fog).
- PDF and DOCX file upload support.
- Grammar check integration.
- Multi-language humanization support.
- Side-by-side workflow for source, output, and quality feedback.
- Browser-first key handling: all API keys stay on your device.
- Dark/light theme toggle.
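The readability metrics listed above follow the standard published formulas. A minimal sketch in TypeScript, using a vowel-group approximation for syllable counts (the app's own counter may differ; these helper names are illustrative, not the repo's API):

```typescript
// Approximate syllables by counting runs of vowels; every word gets at least one.
function countSyllables(word: string): number {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 1);
}

function tokenize(text: string): { words: string[]; sentences: number } {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) || []).length);
  const words = text.split(/\s+/).filter(w => /[a-z]/i.test(w));
  return { words, sentences };
}

// Flesch-Kincaid grade level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
function fleschKincaidGrade(text: string): number {
  const { words, sentences } = tokenize(text);
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
  return 0.39 * (words.length / sentences) + 11.8 * (syllables / words.length) - 15.59;
}

// Gunning Fog index: 0.4 * (words/sentences + 100 * complexWords/words),
// where a "complex" word has three or more syllables.
function gunningFog(text: string): number {
  const { words, sentences } = tokenize(text);
  const complex = words.filter(w => countSyllables(w) >= 3).length;
  return 0.4 * (words.length / sentences + (100 * complex) / words.length);
}
```

Both scores rise with longer sentences; Gunning Fog additionally penalizes polysyllabic vocabulary, which is why the two can disagree on dense academic prose.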
## How It Beats AI Detectors
StealthHumanizer uses a multi-layer approach grounded in real data rather than guesswork:
- LLM rewrite: your chosen provider transforms the text using a corpus-aware prompt injected with statistical targets from 10,000 real Q1 papers.
- Corpus-aware post-processing: an expanded collocation engine replaces 150+ known AI-signature phrases with natural alternatives.
- Detection calibration: the built-in detector scores output against dynamic thresholds derived from real human writing (mean sentence length 20.5, burstiness 0.426, vocabulary diversity 69.4%, passive voice 18.1%).

The result is text that doesn't just avoid AI patterns: it matches human writing patterns measured from actual published research.
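The calibration step can be sketched as a distance measure against the corpus targets. This is an illustrative stand-in for the real detector in `lib/`: `StyleTargets` and `scoreAgainstCorpus` are hypothetical names, and burstiness is assumed here to be the coefficient of variation of sentence length, since the README does not define it.

```typescript
interface StyleTargets {
  meanSentenceLength: number; // corpus mean: 20.5 words
  burstiness: number;         // corpus mean: 0.426
}

// Split on sentence-ending punctuation and count words per sentence.
function sentenceLengths(text: string): number[] {
  return text
    .split(/[.!?]+/)
    .map(s => s.trim())
    .filter(Boolean)
    .map(s => s.split(/\s+/).length);
}

// Burstiness as coefficient of variation: std dev of sentence length / mean.
function burstiness(lengths: number[]): number {
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  const variance = lengths.reduce((a, b) => a + (b - mean) ** 2, 0) / lengths.length;
  return Math.sqrt(variance) / mean;
}

// Lower distance from the corpus targets reads as "more human-like" in this sketch.
function scoreAgainstCorpus(text: string, targets: StyleTargets): number {
  const lengths = sentenceLengths(text);
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  const dLen = Math.abs(mean - targets.meanSentenceLength) / targets.meanSentenceLength;
  const dBurst = Math.abs(burstiness(lengths) - targets.burstiness);
  return dLen + dBurst;
}
```

Uniform sentence lengths drive burstiness toward zero, which is a classic AI-text signal; text matching the corpus targets scores near zero here.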
## Architecture
High-level architecture:
- UI layer (`components/`, `app/page.tsx`): text entry, settings, and result rendering.
- API routes (`app/api/`): provider orchestration and rewrite workflows.
- Style model layer (`public/corpus-style-model.json`): corpus statistics and calibrated thresholds from 10,000 Q1 papers, loaded client-side.
- Core logic (`lib/`): prompt construction (with corpus-aware injection), provider abstraction, detector scoring, and storage helpers.
- Research and evaluation scripts (`scripts/`, `data/`): benchmark, training, and corpus ingestion pipelines.
- Documentation (`docs/`): user and contributor guides published via GitHub Pages.
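Because the style model ships as a static asset under `public/`, the client can fetch it from the site root (Next.js serves `public/` files there). A sketch of that load path; the field names below are hypothetical illustrations, not the actual schema (see STYLE_ENGINE.md for that):

```typescript
interface CorpusStyleModel {
  meanSentenceLength: number; // e.g. 20.5 from the Q1 corpus
  burstiness: number;         // e.g. 0.426
}

// Parse and validate the raw JSON before trusting it in scoring code.
function parseStyleModel(raw: string): CorpusStyleModel {
  const data = JSON.parse(raw) as Partial<CorpusStyleModel>;
  if (typeof data.meanSentenceLength !== "number" || typeof data.burstiness !== "number") {
    throw new Error("corpus-style-model.json is missing expected numeric fields");
  }
  return data as CorpusStyleModel;
}

// In the browser, public/ assets resolve from the site root.
async function loadStyleModel(): Promise<CorpusStyleModel> {
  const res = await fetch("/corpus-style-model.json");
  if (!res.ok) throw new Error(`style model fetch failed: ${res.status}`);
  return parseStyleModel(await res.text());
}
```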
For deeper technical details, see ARCHITECTURE.md and STYLE_ENGINE.md.
## Installation
Prerequisites:
- Node.js 20+
- npm 10+

```bash
git clone https://github.com/rudra496/StealthHumanizer.git
cd StealthHumanizer
npm ci
```

## Quickstart
Run the application locally:

```bash
npm run dev
```

Then open http://localhost:3000, add a provider API key in settings, and run a rewrite.
## Configuration
StealthHumanizer is configured primarily through UI controls and local browser storage.
- Provider keys: configured in app settings and stored locally.
- Rewrite strategy: choose level, style, tone, and target score.
- Research pipeline scripts: use JSON configs under `data/papers/*.config.example.json` and `data/models/*.config.example.json`.

See docs/configuration.md for full details.
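Local key storage amounts to a small persistence layer over browser storage. A minimal sketch, assuming a hypothetical storage key name and helper names (the real helpers live in `lib/`; in the browser, `localStorage` satisfies the `KeyStore` shape):

```typescript
// Minimal storage abstraction so the helpers are testable outside a browser.
interface KeyStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

type ProviderKeys = Record<string, string>; // provider id -> API key

const STORAGE_KEY = "stealth-humanizer:provider-keys"; // hypothetical name

// Keys are serialized into client storage only; nothing is sent to a server.
function saveProviderKeys(keys: ProviderKeys, store: KeyStore): void {
  store.setItem(STORAGE_KEY, JSON.stringify(keys));
}

function loadProviderKeys(store: KeyStore): ProviderKeys {
  const raw = store.getItem(STORAGE_KEY);
  return raw ? (JSON.parse(raw) as ProviderKeys) : {};
}
```

This is what "browser-first key handling" means in practice: the keys round-trip through the device's own storage and are attached to provider requests client-side.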
## Usage Examples
- Paste AI-generated text.
- Select rewrite level, style, and tone.
- Run humanization.
- Review detector/readability scores and iterate.
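The review-and-iterate step is a feedback loop: rewrite, score, repeat until the detector is satisfied. A minimal sketch, assuming a `rewrite` callback wired to the chosen provider and a `detectorScore` in [0, 1] (both hypothetical parameter names; the real wiring lives in the app's API routes and `lib/`):

```typescript
// Re-run humanization until the detector score drops below a target,
// bounded by maxPasses so a stubborn text cannot loop forever.
async function humanizeUntil(
  text: string,
  rewrite: (t: string) => Promise<string>,
  detectorScore: (t: string) => number, // 0 = human-like, 1 = AI-like
  target = 0.3,
  maxPasses = 4,
): Promise<string> {
  let current = text;
  for (let pass = 0; pass < maxPasses; pass++) {
    if (detectorScore(current) <= target) break;
    current = await rewrite(current);
  }
  return current;
}
```

The multi-pass ninja mode behaves analogously: each pass feeds the previous output back through the rewrite stage.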
## Testing and Local Development
Run the full Q1-ready research pipeline:

```bash
npm run pipeline:q1-ready
```

Faster reruns (skip reinstall):

```bash
npm run pipeline:q1-ready:skip-install
```

The wrapper (scripts/papers/complete-ready-pipeline.mjs) runs:
- `npm ci` (unless `--skip-install`)
- `node scripts/papers/batch-download-and-train.mjs` with Q1 OA configs
- `node scripts/model/evaluate-framework.mjs --manifest data/models/current/run.manifest.json`

Individual steps:

```bash
npm run papers:benchmark -- --config data/papers/benchmark.smoke.config.json --run-id local-smoke
npm run model:train -- --config data/models/train.smoke.config.json --run-id local-smoke
npm run model:eval -- --manifest data/models/current/run.manifest.json
```

Quality checks:

```bash
npm run lint
npm run test:integration
npm run build
```

See CONTRIBUTING.md for workflow standards.
## Roadmap
See ROADMAP.md for release milestones and planned improvements.
## Contributing
Contributions are welcome. Please review CONTRIBUTING.md before opening a pull request.
Maintainer: Rudra Sarker, 3rd-year IPE student at SUST, Bangladesh, building open-source tools for accessibility, education, and developer productivity.
## Security, Support, and License
- Security policy: SECURITY.md
- Support channels: SUPPORT.md
- Code of Conduct: CODE_OF_CONDUCT.md
- License: MIT