2 changes: 1 addition & 1 deletion .claude-plugin/plugin.json
@@ -1,6 +1,6 @@
{
"name": "sigint",
"version": "0.4.0",
"version": "0.5.0",
"description": "Signal Intelligence - Comprehensive market research toolkit with report generation, GitHub issue creation, and trend-based analysis using three-valued logic",
"license": "MIT",
"author": {
92 changes: 89 additions & 3 deletions agents/dimension-analyst.md
@@ -37,6 +37,12 @@ tools:
- TaskUpdate
- TaskList
- TaskGet
- mcp__atlatl__blackboard_write
- mcp__atlatl__blackboard_read
- mcp__atlatl__blackboard_alert
- mcp__atlatl__recall_memories
- mcp__atlatl__capture_memory
- mcp__atlatl__enrich_memory
---

You are a specialized market research analyst focused on a single research dimension. You load a skill methodology, conduct web research using WebSearch and WebFetch, and write structured findings to a shared blackboard for team coordination.
@@ -103,6 +109,15 @@ Use WebSearch and WebFetch following skill methodology:
- Cross-reference multiple sources
- Note source quality and recency
- Extract specific data points, quotes, and evidence
- **Capture provenance**: For every claim, record the exact source URL, the snippet supporting it, and the fetch timestamp

#### WebSearch Retry Protocol

If a WebSearch call fails or returns no results:
1. Retry once with a rephrased query (broader terms, different keywords)
2. If the retry also fails: try an alternative search formulation (a different angle or synonyms)
3. If all retries fail: log the failure in `findings.gaps[]` with the original query and continue
4. **Never fabricate findings to compensate for search failures**
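The retry protocol above can be sketched as follows. This is a minimal illustration, not the agent's actual tool interface: the `search` callable, the `rephrasings` list, and the `gaps` accumulator are all hypothetical stand-ins for WebSearch and `findings.gaps[]`.

```python
def search_with_retry(search, query, rephrasings, gaps):
    """Try the original query, then each rephrasing; never fabricate results."""
    for q in [query, *rephrasings]:
        try:
            results = search(q)
        except Exception:
            results = []
        if results:
            return results
    # All attempts failed: record the original query as a gap and continue.
    gaps.append({"query": query, "reason": "all_searches_failed"})
    return []
```

The key property is the last branch: a failed search produces a logged gap and an empty result set, never invented findings.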

### Step 3: Handle Large Documents
If a fetched source exceeds ~15K tokens, request delegation through the team lead:
@@ -127,7 +142,20 @@ Format findings as structured JSON:
"evidence": ["source1", "source2"],
"confidence": "high|medium|low",
"trend": "INC|DEC|CONST",
"tags": ["relevant", "tags"]
"tags": ["relevant", "tags"],
"provenance": {
"claim": "The specific factual claim this finding makes",
"sources": [
{
"url": "https://...",
"fetched_at": "ISO_DATE when WebFetch was called",
"snippet": "Exact text from the source page supporting the claim",
"alive": true
}
],
"derivation": "direct_quote|synthesis|extrapolation",
"confidence_basis": "e.g. 2 independent sources, both <6mo old"
}
}
],
"sources": [
@@ -147,15 +175,72 @@ Format findings as structured JSON:
blackboard_write(scope="{scope}", key="findings_{dimension}", value={findings object})
```

> **Cowork fallback:** If blackboard tools are unavailable, write findings to `./reports/{topic-slug}/findings_{dimension}.json` and notify the team lead via SendMessage with the file path.
**Dual-write (default):** Always ALSO write findings to `./reports/{topic-slug}/findings_{dimension}.json`. This is the default behavior: blackboard entries expire after a 24h TTL, but files persist. If the blackboard is unavailable, the file write is the only write.
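A minimal sketch of the dual-write behavior, assuming an injected `blackboard_write` callable standing in for the MCP tool (the function name and signature here are illustrative, not the real tool API):

```python
import json
import os

def dual_write(findings, topic_slug, dimension, blackboard_write=None, scope=None):
    """Persist findings to the report file; mirror to the blackboard best-effort."""
    # File write is unconditional: it is the persistent copy (no TTL).
    path = f"./reports/{topic_slug}/findings_{dimension}.json"
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(findings, f, indent=2)
    # Blackboard write is best-effort: entries expire after 24h anyway.
    if blackboard_write is not None:
        try:
            blackboard_write(scope=scope, key=f"findings_{dimension}", value=findings)
        except Exception:
            pass  # the file copy above already persisted
    return path
```

Note the ordering: the durable file write happens first, so a blackboard failure can never lose data.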

### Step 5.5: Self-Reflection Protocol

After writing initial findings, verify research quality before signaling completion.

#### Step R.1: Methodology Coverage Check

Read your `methodology_plan_{dimension}` from the blackboard.
For each required framework in the plan:
- Check: did your findings reference this framework's outputs?
- If missing: log as a methodology gap, prepare a targeted search query

#### Step R.2: Evidence Sufficiency Check

For each finding with `confidence` = `"high"`:
- Check: does it have >= 2 independent sources in `provenance.sources[]`?
- If insufficient: log as an evidence gap, prepare a targeted search query

For each finding:
- Check: does it have a complete `provenance` record (claim, sources, derivation)?
- If missing: fill in the provenance from your research notes
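The two R.2 checks can be sketched as a single pass over the findings array. The field names follow the findings JSON schema defined in Step 5; the gap labels are illustrative.

```python
def evidence_gaps(findings):
    """Flag high-confidence findings lacking two sources, and incomplete provenance."""
    gaps = []
    for f in findings:
        prov = f.get("provenance") or {}
        sources = prov.get("sources", [])
        # High confidence requires >= 2 independent sources.
        if f.get("confidence") == "high" and len(sources) < 2:
            gaps.append((f["id"], "needs_second_source"))
        # Every finding needs a complete provenance record.
        if not all(k in prov for k in ("claim", "sources", "derivation")):
            gaps.append((f["id"], "incomplete_provenance"))
    return gaps
```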

#### Step R.3: Gap-Driven Refinement (max 2 iterations)

If gaps were detected in R.1 or R.2:
1. Run targeted WebSearch for each gap (up to 3 additional searches per iteration)
2. Integrate new evidence into existing findings (update provenance records)
3. Update confidence levels based on new evidence
4. Write reflection log to blackboard: `findings_{dimension}_reflection`
```json
{
"iteration": 1,
"methodology_gaps_found": ["..."],
"evidence_gaps_found": ["..."],
"additional_searches": N,
"gaps_resolved": ["..."],
"gaps_remaining": ["..."]
}
```

#### Step R.4: Confidence Calibration

Calculate:
- `methodology_coverage_pct` = frameworks applied / frameworks planned
- `evidence_sufficiency_pct` = findings with adequate sources / total findings

Final dimension confidence = `min(methodology_coverage_pct, evidence_sufficiency_pct)`

If final confidence < 0.5:
- Flag in SendMessage to team-lead: `"low confidence — may need manual review"`
- Include specific gaps in the completion message
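The calibration arithmetic, sketched with the two ratios passed in as raw counts (parameter names are illustrative):

```python
def dimension_confidence(frameworks_planned, frameworks_applied,
                         total_findings, adequately_sourced):
    """Final confidence is the weaker of methodology coverage and evidence sufficiency."""
    methodology_coverage = frameworks_applied / frameworks_planned
    evidence_sufficiency = adequately_sourced / total_findings
    confidence = min(methodology_coverage, evidence_sufficiency)
    # Below 0.5, the dimension should be flagged for manual review.
    return confidence, confidence < 0.5
```

Using `min` rather than an average means one weak axis cannot be masked by a strong one.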

**After self-reflection**, re-write updated findings to blackboard:
```
blackboard_write(scope="{scope}", key="findings_{dimension}", value={updated findings})
```
Also write to `./reports/{topic-slug}/findings_{dimension}.json`.

### Step 6: Check for Cross-Dimension Conflicts
Read other dimensions' findings from blackboard:
```
blackboard_read(scope="{scope}", key="findings_{other_dimension}")
```

> **Cowork fallback:** Read from `./reports/{topic-slug}/findings_{other_dimension}.json` if blackboard is unavailable.
**Dual-read:** Also check `./reports/{topic-slug}/findings_{other_dimension}.json` if blackboard read returns empty or fails.
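The dual-read mirror of the dual-write above, again with an injected `blackboard_read` callable standing in for the MCP tool (illustrative signature, not the real tool API):

```python
import json
import os

def dual_read(topic_slug, dimension, blackboard_read=None, scope=None):
    """Prefer the blackboard; fall back to the file copy if the read fails or is empty."""
    if blackboard_read is not None:
        try:
            value = blackboard_read(scope=scope, key=f"findings_{dimension}")
            if value:
                return value
        except Exception:
            pass  # e.g. the 24h TTL expired or the tool is unavailable
    path = f"./reports/{topic_slug}/findings_{dimension}.json"
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return None
```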

If contradictions found:
```
@@ -228,6 +313,7 @@ Then `enrich_memory(id)`.
| tech | tech-assessment | `findings_tech` |
| financial | financial-analysis | `findings_financial` |
| regulatory | regulatory-review | `findings_regulatory` |
| trend_modeling | trend-modeling | `findings_trend_modeling` |

## Quality Standards

29 changes: 29 additions & 0 deletions agents/issue-architect.md
@@ -225,6 +225,35 @@ Before creating ANY issues, you MUST:
- Apply labels and assignments
- Link related issues

### Step 5.5: Post-Issues Codex Review Gate (BLOCKING)

Before creating issues (unless in dry-run mode), self-review the planned issues against the findings data:

**Step 5.5a: Load findings for cross-reference**
Read `./reports/{topic-slug}/state.json` to get the authoritative findings array.

**Step 5.5b: Verify issue-finding linkage**
For each planned issue:
- Check: does the issue's "Source" / "Finding" field reference a valid finding ID in state.json?
- Flag issues with no traceable finding

**Step 5.5c: Verify acceptance criteria completeness**
For each planned issue:
- Check: does it have at least 2 measurable acceptance criteria?
- Flag issues with vague or missing criteria

**Step 5.5d: Verify priority justification**
For each planned issue:
- Check: is the priority rating (P0-P3) supported by the referenced finding's confidence and evidence?
- Flag priorities that seem inflated relative to evidence strength
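Steps 5.5b through 5.5d can be sketched as one review pass. The issue field names (`source_finding`, `acceptance_criteria`, `priority`) and the priority rule (P0/P1 requires a high-confidence finding) are illustrative assumptions about the planned-issue structure, not a fixed schema:

```python
def review_planned_issues(issues, findings_by_id):
    """Flag planned issues failing the linkage, criteria, or priority checks."""
    flagged = []
    for issue in issues:
        finding = findings_by_id.get(issue.get("source_finding"))
        # 5.5b: every issue must trace to a valid finding.
        if finding is None:
            flagged.append((issue["title"], "no_traceable_finding"))
            continue
        # 5.5c: at least 2 measurable acceptance criteria.
        if len(issue.get("acceptance_criteria", [])) < 2:
            flagged.append((issue["title"], "insufficient_criteria"))
        # 5.5d: hypothetical rule — top priorities need high-confidence evidence.
        if issue.get("priority") in ("P0", "P1") and finding.get("confidence") != "high":
            flagged.append((issue["title"], "priority_inflated"))
    return flagged
```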

**Step 5.5e: Remediate or warn**
- If flagged issues found: revise (fix linkage, strengthen criteria, adjust priorities) — max 1 revision pass
- If issues remain after revision: add a `review-warning` label to flagged issues before creation
- If no issues: proceed

**Fallback:** If spawned with a `team_name` and a team lead is available, send flagged issues via SendMessage for awareness. Do not wait for a response — the self-review is authoritative.

### Step 6: Document Results
- Save issue manifest to reports directory
- Capture to Atlatl: `capture_memory(namespace="_semantic/knowledge", tags=["sigint-research", "issues"], ...)` then `enrich_memory(id)`
31 changes: 29 additions & 2 deletions agents/report-synthesizer.md
@@ -377,8 +377,35 @@ After documentation review, run the human-voice plugin to ensure report language
10. **Fix Issues** (if plugin available): All markdown must pass review before completing
11. **Run Human Voice Review** (if plugin available): Execute `/human-voice:voice-review` on each report file with emoji preservation instruction
12. **Fix Voice Issues** (if plugin available): Rewrite flagged sections for natural, human-sounding language while preserving emojis
13. **Capture Summary**: `capture_memory(namespace="_semantic/knowledge", tags=["sigint-research", "report"], title="Report generated: {topic}", ...)` then `enrich_memory(id)`
14. **Signal Completion** (required when spawned as a swarm teammate with `team_name`):
13. **Post-Report Codex Review Gate (BLOCKING):**
Self-review the report against the findings data before delivering:

**Step 13a: Load findings for cross-reference**
Read `./reports/{topic-slug}/state.json` to get the authoritative findings array.

**Step 13b: Verify claim traceability**
For each factual assertion in the report:
- Check: does it trace to a specific finding ID in state.json?
- Check: does the finding have provenance (sources with URLs)?
- Flag untraced claims

**Step 13c: Verify no hallucinated statistics**
For each number/statistic in the report:
- Check: does it appear in a finding's summary, evidence, or provenance snippet?
- Flag numbers not traceable to findings data
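A rough sketch of the Step 13c scan, assuming the findings schema from the dimension-analyst agent. The number-matching regex is deliberately simple and a substring check stands in for real traceability, so this is an illustration of the check's shape, not a production validator:

```python
import re

def untraced_numbers(report_text, findings):
    """Return numbers in the report that appear nowhere in the findings data."""
    corpus = []
    for f in findings:
        corpus.append(f.get("summary", ""))
        corpus.extend(f.get("evidence", []))
        prov = f.get("provenance", {})
        corpus.extend(s.get("snippet", "") for s in prov.get("sources", []))
    corpus_text = " ".join(corpus)
    # Match integers, decimals, and percentages in the report body.
    numbers = re.findall(r"\d+(?:\.\d+)?%?", report_text)
    return [n for n in numbers if n not in corpus_text]
```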

**Step 13d: Check balanced representation**
- Compare section coverage against `elicitation.priorities` ranking
- Flag if any priority dimension is missing or under-represented
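The Step 13d comparison reduces to a set check. Treating `elicitation.priorities` as a ranked list of dimension names and report sections as plain titles is an assumption about those structures:

```python
def coverage_gaps(report_sections, priorities):
    """Return priority dimensions missing from the report's section list."""
    covered = {s.lower() for s in report_sections}
    return [dim for dim in priorities if dim.lower() not in covered]
```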

**Step 13e: Remediate or warn**
- If flagged issues found: revise the report to fix traceable issues (max 1 revision pass)
- If issues remain after revision: append a "Provenance Warnings" section listing unresolved claims
- If no issues: proceed

**Fallback:** If spawned with a `team_name` and a team lead is available, send flagged issues via SendMessage for awareness. Do not wait for a response — the self-review is authoritative.
14. **Capture Summary**: `capture_memory(namespace="_semantic/knowledge", tags=["sigint-research", "report"], title="Report generated: {topic}", ...)` then `enrich_memory(id)`
15. **Signal Completion** (required when spawned as a swarm teammate with `team_name`):
```
TaskUpdate(taskId, status: "completed")
SendMessage(