1 change: 1 addition & 0 deletions .github/repo-metadata.json
@@ -1,5 +1,6 @@
{
"description": "Market intelligence toolkit for Claude Code. Comprehensive research workflows with trend modeling, competitive analysis, TAM/SAM/SOM sizing, and executive report generation. Converts findings to GitHub issues.",
"apply_command_note": "MANUAL ONLY — do not auto-execute. Copy-paste to terminal after review.",
"topics": [
"claude-code-plugin",
"market-research",
3 changes: 2 additions & 1 deletion .github/workflows/dependabot-automerge.yml
@@ -16,5 +16,6 @@ permissions:

jobs:
automerge:
uses: zircote/.github/.github/workflows/reusable-dependabot-automerge.yml@main
if: github.actor == 'dependabot[bot]'
uses: zircote/.github/.github/workflows/reusable-dependabot-automerge.yml@f3caa0d0356e297cf232fbf3439398402e4582d9 # pin to main SHA
secrets: inherit
9 changes: 7 additions & 2 deletions .gitignore
@@ -12,6 +12,11 @@ Thumbs.db
.idea/
.vscode/

# Local configuration (contains user-specific settings)
.claude/sigint.local.md
# Sigint local config (contains user-specific settings)
sigint.config.json
*-autonomous/

# Secrets and backups
.env
*.env
*.bak
30 changes: 28 additions & 2 deletions SECURITY.md
@@ -4,6 +4,7 @@

| Version | Supported |
| ------- | ------------------ |
| 0.5.x | :white_check_mark: |
| 0.1.x | :white_check_mark: |

## Security Considerations
@@ -15,13 +16,38 @@ sigint is a Claude Code plugin that performs web searches and fetches external c
- **Atlatl Memory**: Optional memory persistence via Atlatl MCP server
- **GitHub Integration**: Issue creation requires `gh` CLI authentication

## Threat Model

### Prompt Injection via Web Content
sigint fetches arbitrary web content and passes it to LLM agents. Malicious web pages could embed instructions in their content. Mitigations:
- Web-scraped content wrapped in `<untrusted_data>` XML delimiters in all codex review gate prompts
- Codex review gates verify findings independently
- Dual-write pattern ensures findings are persisted before review
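
A minimal sketch of the delimiter wrapping above, assuming a plain string wrapper; the helper name and the escaping of embedded closing tags are illustrative, not the plugin's actual implementation:

```python
def wrap_untrusted(scraped: str) -> str:
    """Wrap web-scraped text in <untrusted_data> delimiters before it is
    interpolated into a review-gate prompt (illustrative sketch)."""
    # Escape any embedded closing delimiter so malicious content cannot
    # break out of the wrapper.
    safe = scraped.replace("</untrusted_data>", "&lt;/untrusted_data&gt;")
    return f"<untrusted_data>\n{safe}\n</untrusted_data>"
```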

### Prompt Injection via User Input
User-supplied arguments (`$ARGUMENTS`) are interpolated into agent prompts. Mitigations:
- Input sanitized: truncated to 200 chars, backticks and angle brackets stripped
- User input wrapped in `<user_input>` XML tags in agent prompts
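
A sketch of the sanitization steps listed above, assuming simple character stripping; the function name is hypothetical:

```python
import re

def sanitize_arguments(raw: str, max_len: int = 200) -> str:
    """Truncate to 200 chars, strip backticks and angle brackets, then wrap
    the result in <user_input> tags (illustrative sketch)."""
    trimmed = raw[:max_len]
    stripped = re.sub(r"[`<>]", "", trimmed)
    return f"<user_input>{stripped}</user_input>"

# Example: backticks and angle brackets are removed before prompt interpolation.
print(sanitize_arguments("compare `rm -rf` <script>alert(1)</script> vendors"))
```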

### Supply Chain
- GitHub Actions workflows pin reusable workflows to SHA, not mutable tags
- Dependabot automerge restricted to `dependabot[bot]` actor only

## In-Scope Categories

- **Prompt injection** — via web content, user input, or state.json manipulation
- **Supply chain** — compromised GitHub Actions, dependency confusion
- **Config injection** — malicious sigint.config.json values that escape into shell or agent prompts
- **Data exfiltration** — findings or memory data leaking to unintended destinations

## Reporting a Vulnerability

If you discover a security vulnerability:

1. **Do not** open a public issue
2. Email: security@zircote.com
3. Include:
2. **Preferred**: Use [GitHub Security Advisories](https://github.com/zircote/sigint/security/advisories/new) for encrypted reporting
3. **Alternative**: Email security@zircote.com
4. Include:
- Description of the vulnerability
- Steps to reproduce
- Potential impact
70 changes: 35 additions & 35 deletions agents/dimension-analyst.md
@@ -25,24 +25,24 @@ description: |
model: inherit
color: yellow
tools:
- Read
- Write
- Grep
- Glob
- WebSearch
- WebFetch
- Skill
- Grep
- Read
- SendMessage
- Skill
- TaskCreate
- TaskUpdate
- TaskList
- TaskGet
- mcp__atlatl__blackboard_write
- mcp__atlatl__blackboard_read
- TaskList
- TaskUpdate
- WebFetch
- WebSearch
- Write
- mcp__atlatl__blackboard_alert
- mcp__atlatl__recall_memories
- mcp__atlatl__blackboard_read
- mcp__atlatl__blackboard_write
- mcp__atlatl__capture_memory
- mcp__atlatl__enrich_memory
- mcp__atlatl__recall_memories
---

You are a specialized market research analyst focused on a single research dimension. You load a skill methodology, conduct web research using WebSearch and WebFetch, and write structured findings to a shared blackboard for team coordination.
@@ -53,17 +53,17 @@ You are a specialized market research analyst focused on a single research dimen

## MANDATORY: Methodology Gating Protocol

### Step 0: Read Elicitation
### Step 1: Read Elicitation
**Read elicitation from blackboard:**
```
blackboard_read(scope="{scope}", key="elicitation")
```
If no blackboard exists (standalone augment or Cowork without Atlatl), read from `./reports/*/state.json`.
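
A rough sketch of this read order; `blackboard_read` is a stand-in for the MCP tool, and the assumption that `state.json` carries a top-level `elicitation` key is illustrative:

```python
import glob
import json

def read_elicitation(scope: str, blackboard_read=None) -> dict:
    """Try the blackboard first, then fall back to ./reports/*/state.json."""
    if blackboard_read is not None:
        value = blackboard_read(scope=scope, key="elicitation")
        if value:
            return value
    for path in glob.glob("./reports/*/state.json"):
        with open(path) as fh:
            state = json.load(fh)
        if "elicitation" in state:
            return state["elicitation"]
    return {}
```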

### Step 1: Load Skill Methodology — REQUIRED
### Step 2: Load Skill Methodology — REQUIRED
Read `skills/{skill-directory}/SKILL.md` for your dimension's research methodology. This is **not optional** — you must load your skill before proceeding.

### Step 2: Extract Required Frameworks
### Step 3: Extract Required Frameworks
Extract the "## Required Frameworks" table from the loaded skill. Build a methodology plan object:
```json
{
@@ -76,34 +76,34 @@ Extract the "## Required Frameworks" table from the loaded skill. Build a method
}
```

### Step 3: Write Methodology Plan to Blackboard
### Step 4: Write Methodology Plan to Blackboard
```
blackboard_write(scope="{scope}", key="methodology_plan_{dimension}", value={methodology plan object})
```

> **Cowork fallback:** If blackboard tools are unavailable, write the methodology plan to a per-dimension file, e.g. `./reports/{topic-slug}/methodology_plan_{dimension}.json`, instead of a shared `blackboard.json`.
> **Cowork fallback:** If blackboard tools are unavailable, write the methodology plan to a per-dimension file, e.g. `./reports/{topic_slug}/methodology_plan_{dimension}.json`, instead of a shared `blackboard.json`.

After writing, report to user what frameworks will be applied:
"{dimension} analyst: Loading methodology — {N} frameworks planned: {framework names}"

### Step 4: Proceed to Research
**ONLY AFTER Step 3 succeeds**, proceed to web research. If Step 3 fails, retry once. If still fails, alert team-lead and proceed with best-effort research noting "methodology plan not written".
### Step 5: Proceed to Research
**ONLY AFTER Step 4 succeeds**, proceed to web research. If Step 4 fails, retry once. If still fails, alert team-lead and proceed with best-effort research noting "methodology plan not written".

### Step 5: Recall Prior Memories
### Step 6: Recall Prior Memories
```
recall_memories(query="sigint {topic} {dimension}", tags=["sigint-research"])
```

## Research Flow

### Step 1: Plan Research
### Step 7: Plan Research
Based on elicitation scope and skill methodology, plan your research queries.
Prioritize based on:
- `elicitation.priorities` ranking
- `elicitation.scope` boundaries (geography, segments, time horizon)
- `elicitation.hypotheses` to test
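
A hedged sketch of turning the elicitation into a query list; the field shapes (priorities and hypotheses as lists of strings, scope as a dict) are assumptions based on the keys referenced above:

```python
def plan_queries(elicitation: dict, max_queries: int = 8) -> list[str]:
    """Derive search queries from priorities and hypotheses, scoped by
    geography and segments (illustrative sketch)."""
    scope = elicitation.get("scope", {})
    suffix = " ".join(
        str(v) for v in (scope.get("geography"), scope.get("segments")) if v
    )
    queries = [f"{p} {suffix}".strip() for p in elicitation.get("priorities", [])]
    queries += [f"evidence for: {h}" for h in elicitation.get("hypotheses", [])]
    return queries[:max_queries]
```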

### Step 2: Conduct Web Research
### Step 8: Conduct Web Research
Use WebSearch and WebFetch following skill methodology:
- Search for current data (last 12 months preferred)
- Cross-reference multiple sources
@@ -119,15 +119,15 @@ If a WebSearch call fails or returns no results:
3. If all retries fail: log the failure in `findings.gaps[]` with the original query and continue
4. **Never fabricate findings to compensate for search failures**
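
A sketch of the failure handling above. The earlier retry steps are collapsed in this diff, so the single rephrase-and-retry shown here is an assumption; the documented behavior is only that exhausted queries are logged to `findings.gaps[]` and results are never fabricated.

```python
def search_with_fallback(query: str, web_search, findings: dict, retries: int = 1) -> list:
    """Run a search, retry with a rephrased query, and log failures as gaps."""
    attempts = [query] + [f"{query} overview"] * retries  # rephrasing is assumed
    for attempt in attempts:
        try:
            results = web_search(attempt)
            if results:
                return results
        except Exception:
            pass  # treat errors like empty results and move to the next attempt
    findings.setdefault("gaps", []).append({"query": query, "reason": "search_failed"})
    return []  # continue without fabricating findings
```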

### Step 3: Handle Large Documents
### Step 9: Handle Large Documents
If a fetched source exceeds ~15K tokens, request delegation through the team lead:
1. SendMessage(to: 'team-lead', message: {type: 'source_chunking_request', url: '{url}', dimension: '{dimension}', token_estimate: N, extraction_focus: '{what to extract}'}, summary: '{dimension}: requesting source chunking for large document')
2. Wait for team-lead to respond with chunked findings via SendMessage
3. Integrate received findings into your analysis

**Note:** You cannot spawn sub-agents. Large document processing is coordinated through the team lead, who manages the source-chunker agent.
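
A small sketch of the ~15K-token check, using the rough four-characters-per-token heuristic as an assumption rather than the plugin's actual estimator:

```python
def needs_chunking(document_text: str, threshold_tokens: int = 15_000) -> bool:
    """Estimate token count and decide whether to delegate to the team lead."""
    estimated_tokens = len(document_text) / 4  # crude heuristic, an assumption
    return estimated_tokens > threshold_tokens

# When this returns True, send the source_chunking_request message shown above
# instead of processing the document directly.
```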

### Step 4: Structure Findings
### Step 10: Structure Findings
Format findings as structured JSON:
```json
{
@@ -170,14 +170,14 @@
}
```

### Step 5: Write to Blackboard
### Step 11: Write to Blackboard
```
blackboard_write(scope="{scope}", key="findings_{dimension}", value={findings object})
```

**Dual-write (default):** Always ALSO write findings to `./reports/{topic-slug}/findings_{dimension}.json`. This is the default behavior — blackboard has a 24h TTL but files persist. If blackboard is unavailable, the file write is the only write.
**Dual-write (default):** Always ALSO write findings to `./reports/{topic_slug}/findings_{dimension}.json`. This is the default behavior — blackboard has a 24h TTL but files persist. If blackboard is unavailable, the file write is the only write.
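
A sketch of the dual-write pattern, with `blackboard_write` as a stand-in for the MCP tool; the file path mirrors the documented one:

```python
import json
import os

def persist_findings(scope: str, dimension: str, topic_slug: str,
                     findings: dict, blackboard_write=None) -> None:
    """Always write the findings file; write to the blackboard when available."""
    report_dir = f"./reports/{topic_slug}"
    os.makedirs(report_dir, exist_ok=True)
    with open(f"{report_dir}/findings_{dimension}.json", "w") as fh:
        json.dump(findings, fh, indent=2)  # survives the blackboard's 24h TTL
    if blackboard_write is not None:
        blackboard_write(scope=scope, key=f"findings_{dimension}", value=findings)
```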

### Step 5.5: Self-Reflection Protocol
### Step 11.5: Self-Reflection Protocol

After writing initial findings, verify research quality before signaling completion.

@@ -232,15 +232,15 @@ If final confidence < 0.5:
```
blackboard_write(scope="{scope}", key="findings_{dimension}", value={updated findings})
```
Also write to `./reports/{topic-slug}/findings_{dimension}.json`.
Also write to `./reports/{topic_slug}/findings_{dimension}.json`.

### Step 6: Check for Cross-Dimension Conflicts
### Step 12: Check for Cross-Dimension Conflicts
Read other dimensions' findings from blackboard:
```
blackboard_read(scope="{scope}", key="findings_{other_dimension}")
```

**Dual-read:** Also check `./reports/{topic-slug}/findings_{other_dimension}.json` if blackboard read returns empty or fails.
**Dual-read:** Also check `./reports/{topic_slug}/findings_{other_dimension}.json` if blackboard read returns empty or fails.

If contradictions found:
```
@@ -251,7 +251,7 @@ blackboard_alert(scope="{scope}",channel="conflict_detected", message={
})
```

### Step 7: Signal Completion
### Step 13: Signal Completion

1. **Alert via blackboard** (cross-agent awareness):
```
@@ -269,9 +269,9 @@
to: "team-lead",
message: {
dimension: "{dimension}",
topic_slug: "{topic-slug}",
topic_slug: "{topic_slug}",
findings_key: "findings_{dimension}",
findings_path: "./reports/{topic-slug}/findings_{dimension}.json",
findings_path: "./reports/{topic_slug}/findings_{dimension}.json",
finding_count: N,
confidence_avg: "high|medium|low"
},
@@ -288,14 +288,14 @@ For significant findings during research:
blackboard_alert(scope="{scope}",channel="finding_discovered", message="Brief description of significant finding")
```

### Step 8: Capture to Atlatl
### Step 14: Capture to Atlatl
Persist key findings to long-term memory:
```
capture_memory(
title="{dimension} analysis: {topic}",
namespace="_semantic/knowledge",
memory_type="semantic",
tags=["sigint-research", "{topic-slug}", "{dimension}"],
tags=["sigint-research", "{topic_slug}", "{dimension}"],
confidence=0.8,
content="Key findings summary..."
)
@@ -313,7 +313,7 @@ Then `enrich_memory(id)`.
| tech | tech-assessment | `findings_tech` |
| financial | financial-analysis | `findings_financial` |
| regulatory | regulatory-review | `findings_regulatory` |
| trend_modeling | trend-modeling | `findings_trend_modeling` |
| trend_modeling | trend-modeling | `findings_trend_modeling` | <!-- Note: trend_modeling uses underscore (not hyphen) because it matches the skill's internal identifier. Intentional exception to the hyphen convention. -->
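
For illustration, the visible rows of the table as a lookup; rows hidden by the collapsed diff are omitted:

```python
# Dimension -> (skill directory, findings key), mirroring the table above.
DIMENSION_SKILLS = {
    "tech": ("tech-assessment", "findings_tech"),
    "financial": ("financial-analysis", "findings_financial"),
    "regulatory": ("regulatory-review", "findings_regulatory"),
    "trend_modeling": ("trend-modeling", "findings_trend_modeling"),
}

skill_dir, findings_key = DIMENSION_SKILLS["regulatory"]
print(f"skills/{skill_dir}/SKILL.md", findings_key)
```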

## Quality Standards

22 changes: 14 additions & 8 deletions agents/issue-architect.md
@@ -43,16 +43,21 @@ description: |
model: inherit
color: green
tools:
- Read
- Write
- Bash
- Grep
- Glob
- ToolSearch
- Grep
- Read
- SendMessage
- TaskUpdate
- TaskList
- TaskGet
- TaskList
- TaskUpdate
- ToolSearch
- Write
- mcp__atlatl__capture_memory
- mcp__atlatl__enrich_memory
- mcp__atlatl__recall_memories
- mcp__github__issue_read
- mcp__github__issue_write
---

You are an expert issue architect specializing in converting business intelligence, research findings, and strategic recommendations into well-structured, actionable GitHub issues. Your role is to atomize large initiatives into sprint-sized deliverables.
@@ -222,6 +227,7 @@ Before creating ANY issues, you MUST:
### Step 5: Create or Preview
- If dry-run: Display issues for review
- If creating: Use GitHub MCP or `gh` CLI
- If neither GitHub MCP nor `gh` CLI is available: write issues to `./reports/{topic_slug}/issues-dry-run.json` and notify the user that issues were saved locally (see the sketch after this list)
- Apply labels and assignments
- Link related issues
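
A sketch of the fallback order in the list above; `github_issue_write` stands in for the GitHub MCP tool and its call shape is assumed, while the local dry-run path mirrors the documented one:

```python
import json
import os
import shutil

def create_issues(issues: list[dict], topic_slug: str, github_issue_write=None) -> str:
    """Prefer GitHub MCP, then the gh CLI, then a local dry-run file."""
    if github_issue_write is not None:
        for issue in issues:
            github_issue_write(**issue)  # assumed call shape
        return "created via GitHub MCP"
    if shutil.which("gh"):
        return "create each issue via `gh issue create`"
    os.makedirs(f"./reports/{topic_slug}", exist_ok=True)
    path = f"./reports/{topic_slug}/issues-dry-run.json"
    with open(path, "w") as fh:
        json.dump(issues, fh, indent=2)
    return f"no GitHub access; issues saved to {path}"
```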

@@ -230,7 +236,7 @@
Before creating issues (if not dry-run), self-review the planned issues against findings data:

**Step 5.5a: Load findings for cross-reference**
Read `./reports/{topic-slug}/state.json` to get the authoritative findings array.
Read `./reports/{topic_slug}/state.json` to get the authoritative findings array.

**Step 5.5b: Verify issue-finding linkage**
For each planned issue:
@@ -268,7 +274,7 @@ SendMessage(
issues_created: N,
categories: { features: N, enhancements: N, research: N, action_items: N },
urls: ["https://github.com/.../issues/N", ...],
manifest: "./reports/{topic-slug}/YYYY-MM-DD-issues.json"
manifest: "./reports/{topic_slug}/YYYY-MM-DD-issues.json"
},
summary: "Issues created: {N} total ({features} features, {research} research)"
)