
feat: add ai to detect clankers #1

Open
huang-julien wants to merge 14 commits into unveil-project:main from huang-julien:feat/anti_clanker_ai

Conversation

@huang-julien (Contributor)

Extract the logic of MatteoGabriele/agentscan-action#15 to unveil-project

Contributor

Copilot AI left a comment


Pull request overview

This PR adds an optional AI-assisted analysis module to @unveil/identity, allowing heuristic identity signals to be augmented with an LLM-based classification via GitHub Models, and exposes it as a new @unveil/identity/ai subpath.

Changes:

  • Introduces src/ai/* (prompt builder, types, and a GitHub Models-backed getAIAnalysis implementation).
  • Updates build/packaging to emit and export the new ./ai entrypoint, and adds the voight-kampff-compactor dependency.
  • Adds a CLI script to run heuristic + AI analysis and documents the new API in the README.

Reviewed changes

Copilot reviewed 8 out of 9 changed files in this pull request and generated 9 comments.

File | Description
tsdown.config.ts | Adds an explicit multi-entry build including the new AI entrypoint.
src/ai/types.ts | Defines input/output types for AI analysis.
src/ai/prompt.ts | Adds the system prompt and prompt-construction logic (with event slimming/compaction).
src/ai/index.ts | Re-exports the AI module surface as a subpath entry.
src/ai/analysis.ts | Implements getAIAnalysis calling GitHub Models and parsing the response.
scripts/ai-analyser-user.ts | Adds a runnable script to fetch user data/events and invoke the AI analysis.
README.md | Documents AI-enhanced usage and supported models.
package.json | Exposes ./ai, adds script, and adds runtime dependency.
pnpm-lock.yaml | Locks the new dependency and related lock updates.
Files not reviewed (1)
  • pnpm-lock.yaml: Language not supported


Comment on lines +92 to +96
export function buildUserPrompt(input: AIAnalysisInput): string {

const events = slimEvents(input.events);
const eventDates = input.events
.map(e => e.created_at)

Copilot AI Apr 3, 2026


buildUserPrompt is typed to require token and model via AIAnalysisInput, but it doesn't use either field. Consider changing the parameter type to a narrower shape (e.g., a dedicated AIPromptInput or a Pick excluding auth/provider fields) to keep prompt construction decoupled from transport/auth concerns.
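The narrower shape the comment suggests can be sketched as follows. `AIAnalysisInput` is reproduced locally from the fields destructured elsewhere in this PR; the `AIPromptInput` alias and the `buildUserPrompt` body are hypothetical illustrations, not the PR's actual code.

```typescript
// Local stand-in for the PR's AIAnalysisInput, mirroring the fields
// destructured in src/ai/analysis.ts; the real type may differ.
interface AIAnalysisInput {
  token: string;
  model: string;
  username: string;
  analysis: unknown;
  accountCreatedAt: string;
  publicRepos: number;
  events: Array<{ created_at: string }>;
}

// Drop the transport/auth fields so prompt construction stays decoupled.
type AIPromptInput = Omit<AIAnalysisInput, "token" | "model">;

// Hypothetical prompt builder that accepts only prompt-relevant data.
function buildUserPrompt(input: AIPromptInput): string {
  return (
    `User ${input.username}, created ${input.accountCreatedAt}, ` +
    `${input.publicRepos} public repos, ${input.events.length} events.`
  );
}
```

With `Omit` (or an equivalent `Pick`), callers that only build prompts no longer need to fabricate a token, and the compiler documents which fields actually influence the prompt.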

Comment on lines +5 to +6
'src/index.ts',
'src/ai/index.ts',

Copilot AI Apr 3, 2026


This config file otherwise uses double quotes, but the new entry paths use single quotes. For consistency with the established style in this repo, switch these strings to double quotes.

Suggested change
- 'src/index.ts',
- 'src/ai/index.ts',
+ "src/index.ts",
+ "src/ai/index.ts",

Comment on lines +1 to +3
export * from './analysis'
export * from './types'
export * from './prompt'
(no newline at end of file)

Copilot AI Apr 3, 2026


The repo consistently uses double quotes and semicolons (e.g., src/index.ts), but this barrel file uses single quotes and omits semicolons. Aligning with the existing style improves consistency and reduces churn in future edits.

Suggested change
- export * from './analysis'
- export * from './types'
- export * from './prompt'
+ export * from "./analysis";
+ export * from "./types";
+ export * from "./prompt";

Comment on lines +37 to +39
for (let page = 1; page <= 2; page++) {
const res = await fetch(
`https://api.github.com/users/${username}/events?per_page=300&page=${page}`,

Copilot AI Apr 3, 2026


GitHub’s REST API caps per_page at 100; using per_page=300 is ignored and makes the script’s intent unclear. Use per_page=100 (matching scripts/fetch-github-events.js) and adjust paging logic accordingly.

Suggested change
- for (let page = 1; page <= 2; page++) {
-   const res = await fetch(
-     `https://api.github.com/users/${username}/events?per_page=300&page=${page}`,
+ const perPage = 100;
+ const maxEvents = 600;
+ const maxPages = Math.ceil(maxEvents / perPage);
+ for (let page = 1; page <= maxPages; page++) {
+   const res = await fetch(
+     `https://api.github.com/users/${username}/events?per_page=${perPage}&page=${page}`,
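The suggested paging fix can be rounded out into a full loop that also stops early when a page comes back short, so the script does not issue useless extra requests. `fetchUserEvents` is a hypothetical name; the endpoint mirrors the script under review, and headers and error handling are simplified for brevity.

```typescript
// Hedged sketch: GitHub's REST API caps per_page at 100, so fetch up to
// maxEvents in pages of 100 and stop as soon as a short page arrives.
async function fetchUserEvents(username: string, token?: string): Promise<unknown[]> {
  const perPage = 100; // GitHub REST API maximum
  const maxEvents = 600;
  const maxPages = Math.ceil(maxEvents / perPage);
  const events: unknown[] = [];
  for (let page = 1; page <= maxPages; page++) {
    const res = await fetch(
      `https://api.github.com/users/${username}/events?per_page=${perPage}&page=${page}`,
      token ? { headers: { Authorization: `Bearer ${token}` } } : undefined,
    );
    if (!res.ok) break; // e.g. rate limit or unknown user; stop paging
    const batch = (await res.json()) as unknown[];
    events.push(...batch);
    if (batch.length < perPage) break; // short page means no more events
  }
  return events;
}
```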

Comment on lines +78 to +79
`getAIAnalysis` accepts any model available on GitHub Models (e.g. `openai/gpt-4o`, `deepseek/DeepSeek-R1`). It returns `null` if the model produces no usable response.


Copilot AI Apr 3, 2026


The README says getAIAnalysis “returns null if the model produces no usable response,” but the current implementation throws on HTTP errors and JSON parse failures. Either update the README to describe the throwing behavior, or change the implementation to catch/return null for these failure modes.
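One way to honor the README's contract is to make every failure mode collapse to `null`. This is a hedged sketch, not the PR's implementation: `getAIAnalysisSafe` is a hypothetical name, the model call is injected so no network access is needed, and the response shape assumes an OpenAI-style chat completion of the kind GitHub Models returns.

```typescript
// Minimal result shape assumed for illustration.
interface AIAnalysisResult {
  isLikelyBot: boolean;
  reasoning: string;
}

// callModel is injected so the failure handling can be shown (and tested)
// without issuing a real GitHub Models request.
async function getAIAnalysisSafe(
  callModel: () => Promise<{ ok: boolean; json: () => Promise<any> }>,
): Promise<AIAnalysisResult | null> {
  try {
    const res = await callModel();
    if (!res.ok) return null; // HTTP error -> null, not throw
    const body = await res.json();
    const raw: string | undefined = body?.choices?.[0]?.message?.content;
    if (!raw) return null; // empty completion -> null
    // Models often wrap JSON in markdown fences; strip them before parsing.
    const cleaned = raw.replace(/^```(?:json)?\s*|\s*```$/g, "").trim();
    return JSON.parse(cleaned) as AIAnalysisResult;
  } catch {
    return null; // network or JSON.parse failure -> null
  }
}
```

Whichever direction the PR takes, the key is that the README and the code agree on a single contract: either callers must wrap the call in try/catch, or they can rely on `null`.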

Comment on lines +4 to +7
export async function getAIAnalysis(input: AIAnalysisInput): Promise<AIAnalysisResult | null> {
const { token, model = 'openai/gpt-4o-mini', username, analysis, accountCreatedAt, publicRepos, events } = input;
const prompt = buildUserPrompt({ token, model, username, analysis, accountCreatedAt, publicRepos, events });


Copilot AI Apr 3, 2026


New AI prompt construction / response parsing logic is introduced here, but there are no tests covering key behaviors (e.g., prompt includes orgs/heuristic summary, response sanitization/JSON parsing). Adding a small vitest suite with mocked fetch and snapshot/unit assertions would help prevent regressions.
