feat(agents): add LLM Council agent and councilor subagents #1327
andrewDoing wants to merge 1 commit into microsoft:main
Conversation
…stion synthesis

- add council agent to synthesize responses from GPT-5.4, Opus 4.6, and Gemini 3.1 Pro
- implement individual councilor agents for each model
- update documentation to reflect new council structure and capabilities

🔍 Generated by Copilot
Codecov Report ✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff           @@
##             main    #1327   +/-  ##
==========================================
+ Coverage   87.63%   87.65%   +0.01%
==========================================
  Files          61       61
  Lines        9328     9328
==========================================
+ Hits         8175     8176       +1
+ Misses       1153     1152       -1
```

Flags with carried forward coverage won't be shown.
katriendg left a comment:
This new agent and its subagents are a great idea.
What I'm interested in understanding better is how it fits into RPI, and which scenarios to use it for.
Documentation needed:
We also need to document this new option; otherwise it might become another agent that folks don't fully grasp when best to use. Ideally, fully document a few key scenarios.
RPI integration points?
Can it be used after research? Does a user select context files from the branch? Where does the context that the council subagents compare against typically come from?
Can we optionally integrate it into Task Reviewer? That might be the right fit, since you'd have good context there, though it may lack the different implementation options.
Please also ensure you have run the /prompt-analyze prompt on the selected assets.
Finally, once the assets are added to the right collections with the right maturity, run the plugin regeneration.
```yaml
    kind: agent
  - path: .github/agents/hve-core/doc-ops.agent.md
    kind: agent
  - path: .github/agents/hve-core/council.agent.md
```
I love this new addition, but I think we need to make sure we get some user feedback before pushing it into the main collection in stable.
Either add maturity: experimental to each artifact in all the collections you touched, or add it to experimental.collection.yml instead?
```yaml
    kind: agent
  - path: .github/agents/github/github-backlog-manager.agent.md
    kind: agent
  - path: .github/agents/hve-core/council.agent.md
```
Please add maturity: experimental to each of the newly added items.
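To make the request concrete, a minimal sketch of such a collection entry (the placement of the maturity field is an assumption based on this review thread, not the actual manifest schema):

```yaml
# Hypothetical collection entry; exact schema and field ordering may differ.
- path: .github/agents/hve-core/council.agent.md
  kind: agent
  maturity: experimental
```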
Pull Request
Description
Adds an LLM Council agent that dispatches the same question package to GPT-5.4, Opus 4.6, and Gemini 3.1 Pro councilor subagents, then synthesizes consensus, disagreement, assumptions, uncertainty, and a recommended next step into one user-facing answer.
This contribution is scoped as a reusable decision-support pattern for hve-core rather than a broader orchestration platform. It keeps the first upstream cut focused on agent artifacts and discoverability updates.
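As a rough illustration only (the actual file contents are not shown in this view, and all field names here are assumptions), the council agent file might be structured along these lines:

```markdown
---
description: Dispatch one question package to councilor subagents and synthesize a single answer
---

# LLM Council

1. Send the same question package to the GPT-5.4, Opus 4.6, and Gemini 3.1 Pro councilors.
2. Collect each councilor's independent response.
3. Synthesize consensus, disagreement, assumptions, and uncertainty.
4. End with a single recommended next step for the user.
```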
Related Issue(s)
Closes #1326
Type of Change
Select all that apply:
Code & Documentation:
Infrastructure & Configuration:
AI Artifacts:
- Used the prompt-builder agent and addressed all feedback
- Instructions (.github/instructions/*.instructions.md)
- Prompts (.github/prompts/*.prompt.md)
- Agents (.github/agents/*.agent.md)
- Skills (.github/skills/*/SKILL.md)

Other:
- Scripts (.ps1, .sh, .py)

Sample Prompts (for AI Artifact Contributions)
User Request:
Compare two implementation approaches and tell me where the models agree, where they disagree, and what we should do next.
Execution Flow:
Output Artifacts:
The agent returns one synthesized response rather than three separate answers. The response is expected to include a direct answer, consensus points, disagreement and uncertainty, model-by-model summaries, and a final recommendation.
Success Indicators:
The response clearly distinguishes agreement from disagreement, identifies uncertainty, and ends with an actionable recommendation instead of a raw transcript dump.
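For illustration, a response that meets these indicators might follow a skeleton like the one below (the section names are an assumption inferred from the description above, not the agent's actual output format):

```markdown
## Direct Answer
The recommended approach, stated up front.

## Consensus
Points where all three councilors agree.

## Disagreement & Uncertainty
Where the councilors diverge, and what remains unknown.

## Councilor Summaries
- GPT-5.4: …
- Opus 4.6: …
- Gemini 3.1 Pro: …

## Recommendation
One actionable next step.
```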
Testing
Validation has not been run yet in this PR creation flow.
Recommended validation before merge:
- npm run lint:md
- npm run spell-check
- npm run lint:frontmatter
- npm run validate:skills
- npm run lint:md-links
- npm run lint:ps
- npm run plugin:generate
- npm run docs:test

Checklist
Required Checks
AI Artifact Contributions
- Ran /prompt-analyze to review the contribution
- prompt-builder review
The following validation commands must pass before merging:
- npm run lint:md
- npm run spell-check
- npm run lint:frontmatter
- npm run validate:skills
- npm run lint:md-links
- npm run lint:ps
- npm run plugin:generate
- npm run docs:test

Security Considerations
Additional Notes
This PR adds a decision-support agent, not a replacement for RPI or PR Review workflows. The value is independent multi-model reasoning with explicit synthesis.