OpenCode-native port of wanshuiyin/Auto-claude-code-research-in-sleep at upstream commit e8ab30fdd01cfce03bd1695de9943f629849b792.
This repository is a derivative packaging of the original ARIS work by the upstream authors at wanshuiyin/Auto-claude-code-research-in-sleep.
- Original project: wanshuiyin/Auto-claude-code-research-in-sleep
- Upstream snapshot used here: e8ab30fdd01cfce03bd1695de9943f629849b792
- Original license: MIT, preserved in LICENSE
- Port-specific changes in this repo: OpenCode command wrappers, OpenCode config scaffolding, repo-level
AGENTS.md, and compatibility edits for OpenCode paths
See NOTICE.md and UPSTREAM.md for the exact provenance.
This repo keeps the upstream research skills, ports the few Claude-specific file paths that would break in OpenCode, and adds route-aware workflow commands in .opencode/commands/.
- `.opencode/skills/` — 18 upstream research skills copied from ARIS
- `.opencode/commands/` — OpenCode command wrappers, including explicit OpenCode and Codex route variants for the top-level workflows
- port-native additions — `paper-upgrade` for linked-paper improvement, `biowulf-gpu` for one-node Biowulf GPU allocation, module setup, and scratch-disk staging, and `classic-biology-prose` for natural paper writing
- `AGENTS.md` — repo-level instructions for using the port in OpenCode
- `opencode.jsonc` — sample OpenCode-native model and optional MCP configuration
- `templates/project-AGENTS.md` — project metadata template for GPU servers, paper libraries, and paper defaults
- `WORKFLOW_ROUTES.md` — route map for pure Codex versus pure OpenCode execution
- `UPSTREAM.md` — upstream source snapshot and provenance reference
This repo now supports two explicit execution routes:
- Pure Codex — default from this point forward
- Pure OpenCode — opt-in when you explicitly want the OpenCode-native path
Read WORKFLOW_ROUTES.md for the exact command surface.
- Decide your route:
- default: Pure Codex
- optional: Pure OpenCode
- If you want the pure OpenCode route, open this folder in OpenCode and review opencode.jsonc. Optional `zotero` and `obsidian-vault` entries remain disabled until you configure them.
- If you want GPU execution or local paper-library lookup in another repo, copy templates/project-AGENTS.md into that project as `AGENTS.md` and fill in the relevant sections.
- Run one of these top-level workflows:
  - Codex direct:
    `Run the Codex research pipeline for: test-time adaptation for robotics`
  - OpenCode generic command, still routed to Codex by default:
    `/research-pipeline test-time adaptation for robotics`
  - OpenCode explicit:
    `/research-pipeline-opencode test-time adaptation for robotics`
  - Codex explicit:
    `/research-pipeline-codex test-time adaptation for robotics`
  - Codex direct paper route:
    `Run the Codex paper-writing workflow for: NARRATIVE_REPORT.md`
  - OpenCode generic paper command, still routed to Codex by default:
    `/paper-writing NARRATIVE_REPORT.md`
  - OpenCode explicit paper route:
    `/paper-writing-opencode NARRATIVE_REPORT.md`
  - OpenCode generic paper-upgrade command, still routed to Codex by default:
    `/paper-upgrade "https://arxiv.org/abs/2401.12345 — this is my paper"`
- For the shortest setup path, read QUICKSTART.md.
- For the narrative walkthrough, read AUTO_RESEARCH_GUIDE.md.
Paper prose in this repo now defaults to classic-biology-prose.
That means the writing workflow aims for:
- the clarity of Monod and Crick
- the narrative movement of Jacob
- the personality of Brenner
- the reflective cadence of Stent
In practice, the paper should open with the scientific or historical question, state the central point early, avoid machine-sounding workflow prose, and end with a closing paragraph that feels reflective rather than canned.
Just as important, the default is not project-report prose. Papers in this repo should not read like build logs, revision memos, homework notes, or annotated workflow transcripts. They should read like authoritative papers written by serious researchers.
If a venue or project needs a different voice, you can still override it explicitly in the prompt or in the target project's AGENTS.md.
Paper-improvement loops now prefer paperreview.ai when it is configured.
- Set `PAPERREVIEW_EMAIL` or add a project `AGENTS.md` `## External Review` section with the submission email.
- The workflow saves the returned token locally and polls the review endpoint, so it does not depend on email delivery to continue.
- The email is part of submission and optional notification; after submission, the saved token is enough to retrieve the review.
- Current service limits: English PDFs only, max `10 MB`, first `15` pages analyzed.
- The site currently exposes its calibrated numeric score only for ICLR.
- If the service is unavailable or unsuitable for the paper, the workflow falls back to the route-local reviewer and records that fallback in local artifacts.
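The save-the-token-then-poll pattern described above can be sketched as a small helper. This is illustrative only: the `fetch` callable, the `poll_review` name, and all timing parameters are assumptions for this sketch, not the actual paperreview.ai API or the workflow's real code.

```python
import time

def poll_review(fetch, token, interval_s=60, max_tries=120):
    """Poll for a finished review using a saved submission token.

    `fetch` is any callable that takes the token and returns the review
    payload once it is ready, or None while the review is still running.
    Because only the token is needed for retrieval, continuing the
    workflow does not depend on email delivery.
    """
    for _ in range(max_tries):
        review = fetch(token)
        if review is not None:
            return review
        time.sleep(interval_s)
    raise TimeoutError("review not ready after polling window")
```

The key design point is that the token is persisted at submission time, so a crashed or restarted workflow can resume polling without resubmitting the paper.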
In this repo, a paper is only considered a complete final paper package when it includes:
- reported results generated by real executable computation
- the finished paper PDF and source
- the round-by-round paper-improvement history driven by fresh reviewer passes
- all code used to build or support the paper
- sample data, or an explicit local manifest linking the source data and literature inputs
- a dedicated folder of high-resolution figure assets
- a detailed review opinion
- a score
The compiled PDF is necessary, but it is not sufficient on its own. If the work is too large for local compute, use Biowulf for the serious experimental runs and package the resulting code, artifacts, and provenance locally with the paper.
The prose standard matters too. A final paper in this repo should read like a serious human paper, not like a system report that happens to compile.
If you want to see a concrete sample artifact set before running anything yourself, start with examples/end-to-end-sample/README.md.
The intended one-command end state of /research-pipeline is now: IDEA_REPORT.md + AUTO_REVIEW.md + NARRATIVE_REPORT.md + paper/main.pdf + paper/PAPER_IMPROVEMENT_LOG.md + review/REVIEW_OPINION.md + review/review_scorecard.json.
If you want a tracked sample that ends in a complete paper package, read examples/full-paper-sample/README.md.
If you want the linked-paper upgrade workflow shape, read examples/paper-upgrade-sample/README.md.
- Upstream `CLAUDE.md` references were changed to `AGENTS.md`.
- Upstream `~/.claude/feishu.json` references were changed to `~/.config/opencode/feishu.json`.
- The original upstream workflow used a separate reviewer path. In this repo, the generic default route is now Codex, while a pure OpenCode route remains available by explicit choice.
- The upstream repo did not ship actual command files. The command wrappers here are new and map one-to-one to the upstream workflow/skill names.
The pure OpenCode route does not require any reviewer MCP server. Optional integrations:
- `zotero` — literature search over a Zotero library
- `obsidian-vault` — note search over an Obsidian vault
Both remain scaffolded but disabled by default.
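As a sketch only, a disabled optional integration in opencode.jsonc might look like the fragment below. The exact schema is defined by the opencode.jsonc shipped in this repo; the field names here are assumptions for illustration.

```jsonc
{
  // Optional MCP servers stay disabled until you configure them.
  // Field names are illustrative; follow this repo's opencode.jsonc.
  "mcp": {
    "zotero": {
      "enabled": false
    },
    "obsidian-vault": {
      "enabled": false
    }
  }
}
```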
For the broader story of what we learned while getting this project into reliable shape, read docs/technical/project-setup-stories.md.
- CONTRIBUTING.md — contribution expectations for docs, skills, and workflow changes
- SUPPORT.md — how to ask for help effectively
- SECURITY.md — how to report security-sensitive issues
- GitHub issue templates and the PR template live under `.github/`
Source repo: wanshuiyin/Auto-claude-code-research-in-sleep
Reference snapshot: UPSTREAM.md