Local-first Research OS for papers, workflows, experiments, channels, and automation.
English | 中文 | Docs | Roadmap | Research-Equality Ecosystem
Part of the Research-Equality ecosystem for AI-native research workflows.
Persistent research state · Multi-agent runtime · Skills + MCP · Automation + channels
Why ResearchClaw • Quick Start • Research-Equality Ecosystem • What You Get Today • Docs
ResearchClaw is the runtime and workspace layer of the broader Research-Equality stack. It keeps long-horizon research work durable: projects, workflows, claims, evidence, experiments, artifacts, reminders, channels, and automation all live in one local-first system instead of dissolving across chat threads, terminals, and scattered folders.
| Common AI research workflow pain | What ResearchClaw does instead |
|---|---|
| Research work disappears into one-off chats or shell history | Persists project -> workflow -> task -> artifact state with notes, claims, evidence, drafts, and reminders |
| Tools for search, execution, writing, and follow-up are split apart | Puts console, automation, channels, APIs, papers, experiments, and memory into one runtime |
| It is hard to hand work over between web, terminal, and messaging surfaces | Exposes the same research state through the web console, IM channels, cron jobs, sessions, and control-plane APIs |
| Skills, providers, and external tools are glued together ad hoc | Standardizes SKILL.md, MCP, provider routing, fallback chains, and per-agent workspace rules |
Under the hood, the current codebase combines:
- a long-running app runtime with control-plane APIs
- a web console for chat, papers, research, channels, sessions, cron jobs, models, skills, workspace, environments, and MCP
- multi-agent routing with per-agent workspaces and binding rules
- a persistent research state layer for projects, workflows, tasks, notes, claims, evidence, experiments, artifacts, and drafts
- built-in channels for `console`, `telegram`, `discord`, `dingtalk`, `feishu`, `imessage`, `qq`, and `voice`
- model/provider management with multiple providers, multiple models per provider, and fallback chains
- standard `SKILL.md` support, Skills Hub search/install APIs, MCP client management, and custom channels
- automation triggers, cron jobs, heartbeat, proactive reminders, and runtime observability
- paper search/download, BibTeX utilities, LaTeX helpers, data analysis, browser/file tools, and structured research memory
ResearchClaw is still an alpha project, but it is no longer just a platform shell. The code now includes a minimal research workflow runtime, a claim/evidence graph, experiment tracking, blocker remediation, and a project dashboard. The biggest remaining gaps are evidence-matrix quality, stronger claim-evidence validation, richer external execution adapters, and submission/reproducibility packaging.
```bash
git clone https://github.com/MingxinYang/ResearchClaw.git
cd ResearchClaw
pip install -e .
researchclaw init --defaults --accept-security
```

This creates:
- working dir: `~/.researchclaw`
- secret dir: `~/.researchclaw.secret`
- bootstrap Markdown files such as `SOUL.md`, `AGENTS.md`, `PROFILE.md`, and `HEARTBEAT.md`
```bash
researchclaw models config
```

Or add one directly:

```bash
researchclaw models add openai --type openai --model gpt-5 --api-key sk-...
```

Supported provider types in code today: `openai`, `anthropic`, `gemini`, `ollama`, `dashscope`, `deepseek`, `minimax`, `other`, `custom`.
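Fallback chains mean that when one provider fails, the next configured one is tried. The sketch below illustrates that routing idea only; the provider names and call interface are hypothetical stand-ins, not ResearchClaw's actual provider-manager API.

```python
# Minimal sketch of a provider fallback chain. Provider names and the call
# interface are illustrative; ResearchClaw's real routing may differ.
def complete_with_fallback(prompt, providers):
    """Try each (name, call) pair in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # a real chain would narrow this
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

# Example with stand-in callables:
def flaky(prompt):
    raise TimeoutError("primary provider timed out")

def stable(prompt):
    return f"echo: {prompt}"

name, out = complete_with_fallback("hello", [("openai", flaky), ("ollama", stable)])
# name == "ollama", out == "echo: hello"
```

The same ordering logic generalizes to per-provider model lists: each entry in the chain is simply tried until one succeeds.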
```bash
researchclaw app --host 127.0.0.1 --port 8088
```

Open http://127.0.0.1:8088.

If the page says "Console not found", build the frontend once:

```bash
cd console
npm install
npm run build
```

The backend automatically serves `console/dist` when it exists.
After startup, open the Research page in the console to:
- create a project
- inspect workflows, claims, and reminders
- view execution health and recent blockers
- dispatch, execute, or resume remediation work
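Before opening the Research page, you can confirm the runtime is reachable through the control plane. `/api/health` is a documented route in this README, but the JSON shape of its response is an assumption here.

```python
# Ping the local ResearchClaw control plane. /api/health is a documented
# route; the response payload shape is an assumption.
import json
import urllib.request

def health_url(base="http://127.0.0.1:8088"):
    """Compose the health-check URL from the app's base address."""
    return base.rstrip("/") + "/api/health"

def check_health(base="http://127.0.0.1:8088", timeout=5):
    """Fetch and decode the health endpoint; raises if the app is down."""
    with urllib.request.urlopen(health_url(base), timeout=timeout) as resp:
        return json.loads(resp.read())
```

Any HTTP client works equally well; the point is that the console and the APIs expose the same runtime state.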
ResearchClaw works best as the persistent runtime and workspace in a larger skill ecosystem. The companion repositories below cover stage-specific research work, while the awesome repository maps the wider AI scientist and AI-for-research landscape.
Browse the full organization here: Research-Equality repositories
| Repository | Role next to ResearchClaw | Use it when |
|---|---|---|
| RE-idea-generation | authoritative skills for idea generation, problem discovery, and direction exploration | you need to turn vague interests into defensible research directions |
| RE-literature-discovery | authoritative skills for literature discovery, authority-aware ranking, evidence synthesis, and survey writing | you want auditable paper search, filtering, and review workflows |
| RE-research-design | authoritative skills for research design, method formalization, experiment planning, and evaluation design | you need a stronger design layer before implementation starts |
| RE-experiment | authoritative skills for experiment planning, implementation, validation, and analysis | you are reproducing baselines, running ablations, or tightening experiment traceability |
| RE-paper-writing | authoritative skills for paper planning, drafting, revision, LaTeX workflows, and submission QA | you want the writing and submission stack to stay connected to real artifacts |
| awesome-ai-scientists | the Awesome-AI-Research landscape map for AI-native research systems, workflow modules, benchmarks, surveys, datasets, and meta-resources | you want a broader map of AI scientist systems and AI research tooling beyond this project |
A practical pairing is ResearchClaw plus one or two RE-* repositories for the stage you are actively pushing, with awesome-ai-scientists as the discovery layer for adjacent tools, systems, and benchmarks.
- FastAPI app with `/api/health`, `/api/version`, `/api/control/*`, `/api/automation/*`, `/api/providers`, `/api/skills`, `/api/mcp`, `/api/workspace`, and more
- gateway-style runtime bootstrapping for runner, channels, cron, MCP, automation store, and config watcher
- runtime status snapshots for agents, sessions, channels, cron, heartbeat, skills, automation runs, and research services
- project abstraction with persistent `project -> workflow -> task -> artifact` relationships
- workflow stages for `literature_search`, `paper_reading`, `note_synthesis`, `hypothesis_queue`, `experiment_plan`, `experiment_run`, `result_analysis`, `writing_tasks`, and `review_and_followup`
- structured notes including paper notes, idea notes, experiment notes, writing notes, and decision logs
- claim/evidence graph that can link papers, notes, experiments, PDF chunks, citations, generated tables, and artifacts
- experiment tracking with execution bindings, heartbeat/result ingestion, contract validation, result bundle validation, and compare APIs
- proactive workflow reminders plus remediation tasks for missing metrics, outputs, or artifact types
- project dashboards and blocker panels, including batch dispatch/execute/resume actions in the console and APIs
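To make the claim/evidence graph concrete, here is a hypothetical in-memory sketch of claims linking to evidence items. The field names are illustrative; the schema ResearchClaw actually persists (under `research/state.json`) is not shown in this README and will differ.

```python
# Hypothetical in-memory sketch of a claim/evidence graph. Field names are
# illustrative; ResearchClaw's persisted schema will differ.
from dataclasses import dataclass, field

@dataclass
class Evidence:
    kind: str   # e.g. "paper", "note", "experiment", "pdf_chunk"
    ref: str    # identifier of the linked item

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)

    def support(self, kind, ref):
        """Attach one evidence item to this claim."""
        self.evidence.append(Evidence(kind, ref))

    def is_supported(self):
        """A claim with zero evidence items is a validation blocker."""
        return len(self.evidence) > 0

claim = Claim("Method X beats the baseline on dataset Y")
claim.support("experiment", "exp-042")
claim.support("paper", "arxiv:2401.00001")
# claim.is_supported() -> True
```

Unsupported claims are exactly what the blocker panels and remediation tasks above are meant to surface.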
Built-in tools registered by the agent include:
- `semantic_scholar_search`
- `bibtex_search`, `bibtex_add_entry`, `bibtex_export`
- `latex_template`, `latex_compile_check`
- `data_describe`, `data_query`
- `run_shell`, `read_file`, `write_file`, `edit_file`, `append_file`
- `browse_url`, `browser_use`, `send_file`, `memory_search`
- `skills_list`, `skills_activate`, `skills_read_file`
Bundled skills currently shipped in `src/researchclaw/agents/skills/` include:

`arxiv`, `browser_visible`, `citation_network`, `cron`, `dingtalk_channel`, `docx`, `experiment_tracker`, `figure_generator`, `file_reader`, `himalaya`, `literature_review`, `newspaper_summarizer`, `pdf`, `pptx`, `research_notes`, `research_workflows`, `xlsx`
Runtime data lives under the working directory, while secrets are stored separately:
```
~/.researchclaw/
├── config.json
├── jobs.json
├── chats.json
├── research/
│   └── state.json
├── sessions/
├── active_skills/
├── customized_skills/
├── papers/
├── references/
├── experiments/
├── memory/
├── md_files/
├── custom_channels/
└── researchclaw.log

~/.researchclaw.secret/
├── envs.json
└── providers.json
```
Provider credentials and persisted environment variables are intentionally kept out of the working directory.
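The working-dir/secret-dir split can be expressed as path helpers. The directory and file names come from the layout above; the helper functions themselves are illustrative, not ResearchClaw's actual API.

```python
# Path helpers for the layout shown above. Directory names come from this
# README; the helper functions are illustrative.
from pathlib import Path

def working_dir(home=None):
    """Runtime data: config, sessions, papers, memory, logs."""
    home = Path(home) if home else Path.home()
    return home / ".researchclaw"

def secret_dir(home=None):
    """Credentials and persisted env vars, kept out of the working dir."""
    home = Path(home) if home else Path.home()
    return home / ".researchclaw.secret"

def research_state_path(home=None):
    return working_dir(home) / "research" / "state.json"

def providers_path(home=None):
    return secret_dir(home) / "providers.json"
```

Keeping the two roots separate makes it easy to back up or sync the working dir without ever copying credentials.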
Backend checks:
```bash
pip install -e ".[dev]"
PYTHONPATH=src pytest -q
```

Console build:

```bash
npm --prefix console run build
```

Website build:

```bash
corepack pnpm --dir website run build
```

Repo-wide helper:

```bash
scripts/check-ci.sh --skip-install
```

Main documentation files in this repository:
- Intro
- Quick start
- Deployment
- Console
- Channels
- Skills
- MCP
- Memory
- Config and working dir
- Commands
- CLI
- Heartbeat
- Community
- Contributing
- FAQ
- Roadmap
The current codebase is best described as:
- already strong on runtime infrastructure, control plane, channels, and provider/skill compatibility
- already usable for persistent research projects, workflow execution, experiment tracking, claim/evidence linking, and blocker handling
- still incomplete as a full autonomous research platform: evidence-matrix quality, rigorous validators, deeper execution backends, and submission packaging remain ahead