
Add examples #3

Draft
parteeksingh24 wants to merge 15 commits into main from add-examples

Conversation

@parteeksingh24
Contributor

@parteeksingh24 parteeksingh24 commented Sep 26, 2025

Summary by CodeRabbit

  • New Features
    • Added example agents: Telegram bot integration, Streaming patterns, and LLM-as-a-judge.
  • Documentation
    • Added Telegram setup guide and .env example for bot token.
    • Updated README with new agents; improved code block formatting in multiple READMEs.
    • Clarified comments in test-suite.
  • Bug Fixes
    • Improved Slack verification to safely ignore the bot’s own messages.
  • Refactor
    • Streamlined Slack agent to reply immediately and process work in the background.
  • Chores
    • Upgraded @agentuity/sdk to ^0.0.147.

@coderabbitai

coderabbitai Bot commented Sep 26, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

Adds three new example agents (Telegram, Streaming, LLM Judge), associated docs and configs. Introduces Telegram helper utilities and README. Updates Slack agent to use background processing via waitUntil. Extends agent registry and README entries. Minor README formatting tweaks, test-suite comment updates, env example note, and SDK version bump.

Changes

- **Telegram example agent** — `src/agents/example-telegram/index.ts`, `src/agents/example-telegram/telegram.ts`, `src/agents/example-telegram/README.md`, `.env.example`
  New Telegram agent with webhook handling, KV chat history, OpenAI reply generation, and Telegram API send; background processing via `waitUntil`; new types and API helper; README and env comment added.
- **Streaming example agent** — `src/agents/example-streaming/index.ts`
  New multi-path streaming agent with agent-chaining, LLM streaming, and manual streaming flows; uses `resp.stream`, created streams, and `waitUntil`; exports `welcome()`.
- **LLM judge example agent** — `src/agents/example-llm-judge/index.ts`, `src/agents/example-llm-judge/story-eval.ts`
  New agent evaluating two generated stories against a zod schema; integrates a provider agent, produces a structured evaluation and a formatted markdown report; exports `welcome()` and schema utilities.
- **Slack agent adjustments** — `src/agents/example-slack/index.ts`, `src/agents/example-slack/slack.ts`
  Restores manual trigger handling and immediate 200 OK; moves processing into `waitUntil`; optional chaining in the bot user check; retains the existing flow with non-blocking webhook handling.
- **Configuration and registry** — `agentuity.yaml`, `README.md`
  Adds the three agents to the registry and the README Available Agents table with descriptions.
- **Docs and test-suite comments** — `src/agents/test-suite/index.ts`, `src/agents/example-composio/README.md`, `src/agents/example-discord/README.md`
  Adds/updates comments for example agents; normalizes punctuation; README code fences use `env` language tags for examples.
- **Dependency update** — `package.json`
  Bumps `@agentuity/sdk` from ^0.0.146 to ^0.0.147; formatting newline restored.

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor Telegram as Telegram
  participant Agent as example-telegram
  participant KV as KV Store
  participant LLM as OpenAI
  participant TGAPI as Telegram API

  Telegram->>Agent: POST /webhook (update)
  Note right of Agent: Validate + filter update
  Agent-->>Telegram: 200 OK (immediate)
  rect rgba(200,220,255,0.2)
  Note over Agent: Background via waitUntil
  Agent->>KV: get(chat_history)
  KV-->>Agent: history[]
  Agent->>KV: put(updated_history, TTL=86400)
  Agent->>LLM: generate(reply | system+history+user)
  LLM-->>Agent: text
  Agent->>TGAPI: sendMessage(chat_id, text, reply_to?)
  TGAPI-->>Agent: ok/json
  Agent->>KV: put(history+assistant, TTL=86400)
  end
```
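The KV steps above boil down to a rolling per-chat window. A minimal, self-contained sketch of that trimming logic (the `appendToHistory` helper and the 10-message cap follow the walkthrough's description; they are illustrative, not the agent's actual code):

```typescript
// Illustrative sketch of per-chat history trimming, assuming a 10-message
// cap and Unix-second timestamps as described in the walkthrough.
type ChatMessage = {
  role: 'user' | 'assistant';
  content: string;
  ts: number; // Unix seconds
};

const MAX_HISTORY = 10;

// Append a message and keep only the most recent MAX_HISTORY entries.
function appendToHistory(
  history: ChatMessage[],
  message: ChatMessage
): ChatMessage[] {
  return [...history, message].slice(-MAX_HISTORY);
}

// Example: the 11th message evicts the oldest one.
let history: ChatMessage[] = [];
for (let i = 0; i < 11; i++) {
  history = appendToHistory(history, {
    role: i % 2 === 0 ? 'user' : 'assistant',
    content: `message ${i}`,
    ts: 1_700_000_000 + i,
  });
}
```

The trimmed array is what would be written back to KV with the 86400-second TTL shown in the diagram.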
```mermaid
sequenceDiagram
  autonumber
  actor Slack as Slack
  participant Agent as example-slack
  participant LLM as OpenAI
  participant SlackAPI as Slack API

  Slack->>Agent: Event/Webhook
  Agent-->>Slack: 200 OK (verification/ack)
  rect rgba(200,220,255,0.2)
  Note over Agent: Background via waitUntil
  Agent->>Agent: Build thread/history
  Agent->>LLM: Generate reply
  LLM-->>Agent: text
  Agent->>SlackAPI: postMessage(thread_ts, text)
  SlackAPI-->>Agent: ok
  end
```
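Both webhook diagrams share the same shape: acknowledge immediately, then do the slow work off the request path. A self-contained sketch of that pattern, where `waitUntil` is a stand-in for the SDK context method of the same name (mocked here so the example runs without the SDK):

```typescript
// Stand-in for ctx.waitUntil: track background work so the platform can
// await it after the handler has already returned its response.
const pending: Promise<void>[] = [];
function waitUntil(task: Promise<void>): void {
  pending.push(task);
}

const events: string[] = [];

// Webhook handler sketch: return "200 OK" right away, process later.
function handleWebhook(payload: string): string {
  waitUntil(
    (async () => {
      // Simulate slow work (LLM call, API post) that happens after the ack.
      await new Promise((r) => setTimeout(r, 0));
      events.push(`processed:${payload}`);
    })()
  );
  events.push('acked');
  return '200 OK';
}

const status = handleWebhook('hello');
```

Synchronously, only the ack has happened; the `processed:hello` event lands once the event loop drains, which is exactly why Slack and Telegram stop retrying the webhook.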
```mermaid
sequenceDiagram
  autonumber
  participant Client as Client
  participant Agent as example-streaming
  participant Other as example-chat
  participant LLM as OpenAI
  participant Streams as Stream Service

  Client->>Agent: Prompt
  alt "Agent Chain"
    Agent->>Other: request(stream=true)
    Other-->>Agent: stream
    Agent-->>Client: resp.stream(stream)
  else "LLM Streaming"
    Agent->>Streams: create(llm-summary)
    Agent-->>Client: JSON(stream info)
    Agent->>LLM: streamText(...)
    LLM-->>Agent: tokens
    Agent->>Streams: pipe tokens
  else "Manual Streaming"
    Agent->>Streams: create(batch-processing, compressed)
    Agent-->>Client: JSON(stream info)
    loop items 1..100
      Agent->>Streams: write(processed item)
    end
    Agent->>Streams: close()
  else default
    Agent-->>Client: Invalid input message
  end
```
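The "Manual Streaming" branch follows the standard web-streams writer lifecycle (`getWriter()`, `write()`, `close()`). A runnable sketch using the `TransformStream` built into Node 18+; the item count and labels are illustrative, not the agent's real payloads:

```typescript
// Manual streaming sketch: write processed items into a stream, close it,
// and collect what a consumer on the readable side would receive.
async function produceAndCollect(itemCount: number): Promise<string[]> {
  const { readable, writable } = new TransformStream<string, string>();

  // Producer: acquire a writer, emit each item, then close the stream.
  const producer = (async () => {
    const writer = writable.getWriter();
    try {
      for (let i = 1; i <= itemCount; i++) {
        await writer.write(`processed item ${i}`);
      }
    } finally {
      await writer.close(); // always close, even if a write throws
    }
  })();

  // Consumer: drain the readable side concurrently.
  const received: string[] = [];
  const reader = readable.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    received.push(value);
  }
  await producer;
  return received;
}
```

The `finally`-guarded close mirrors the review note below about the stream always being closed in the agent's manual-progress example.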
```mermaid
sequenceDiagram
  autonumber
  participant User as User
  participant Judge as example-llm-judge
  participant Provider as gateway-provider
  participant LLM as OpenAI

  User->>Judge: Request (prompt?)
  Judge->>Provider: Generate Story A
  Provider-->>Judge: text A
  Judge->>Provider: Generate Story B
  Provider-->>Judge: text B
  Judge->>LLM: generateObject(schema: evaluation)
  LLM-->>Judge: evaluation JSON
  Judge-->>User: Markdown report
```
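The judge returns a structured object that is then rendered as markdown. A hedged sketch of that shape and a `formatReport`-style renderer (field names mirror the walkthrough's description of the schema, not necessarily the repository's actual `story-eval.ts`):

```typescript
// Illustrative evaluation shape: two scored stories plus a verdict.
type StoryScore = {
  creativity: number; // 1-10
  quality: number; // 1-10
  strengths: string;
};

type Evaluation = {
  openai: StoryScore;
  google: StoryScore;
  verdict: string;
};

// Render the structured evaluation as a markdown report.
function formatReport(e: Evaluation): string {
  const section = (name: string, s: StoryScore) =>
    [
      `## ${name}`,
      `- Creativity: ${s.creativity}/10`,
      `- Quality: ${s.quality}/10`,
      `- Strengths: ${s.strengths}`,
    ].join('\n');
  return [
    '# Story Evaluation',
    section('OpenAI (GPT-5 Nano)', e.openai),
    section('Google (Gemini 2.0 Flash)', e.google),
    `## Verdict\n${e.verdict}`,
  ].join('\n\n');
}

const report = formatReport({
  openai: { creativity: 8, quality: 7, strengths: 'Vivid imagery.' },
  google: { creativity: 6, quality: 8, strengths: 'Tight pacing.' },
  verdict: 'The OpenAI story edges ahead on originality.',
});
```

Keeping the scoring constrained (1-10 with min/max in the schema) is what lets `generateObject` reject malformed model output before it ever reaches the formatter.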

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

A hop, a tap, new streams untied,
Telegram pings, the judge presides.
Slack whispers back with quiet might,
Three fresh trails in moonlit byte.
I thump approval, ears held high—
Ship it swift; let packets fly. 🐇✨

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)

- **Docstring Coverage** ⚠️ Warning — Docstring coverage is 0.00%, which is below the required threshold of 80.00%. Run `@coderabbitai generate docstrings` to improve docstring coverage.
- **Title Check** ❓ Inconclusive — The title "Add examples" is overly generic and does not clearly convey which examples or components are being introduced, making it hard for a reviewer to immediately understand the scope of the changes. Update the title to specify the key additions, for example "Add example Telegram, streaming, and LLM-judge agents," so that the primary changes are clear at a glance.

✅ Passed checks (1 passed)

- **Description Check** ✅ Passed — Check skipped; CodeRabbit's high-level summary is enabled.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@agentuity-agent

agentuity-agent Bot commented Sep 26, 2025

🧪 Preview Environments Are Live — kitchen-sink-ts

⚠️ Warning: Currently preview environments use the same resources and environment variables as production.

🤖 kitchen-sink

No input channels configured — ➕ Add I/O

🤖 storage-key-value

No input channels configured — ➕ Add I/O

🤖 storage-object-store

No input channels configured — ➕ Add I/O

🤖 gateway-provider

No input channels configured — ➕ Add I/O

🤖 gateway-byo-token

No input channels configured — ➕ Add I/O

🤖 frameworks-provider

No input channels configured — ➕ Add I/O

🤖 io-agent

No input channels configured — ➕ Add I/O

🤖 test-suite

IO Type Access
⏰ Cron Coming Soon 🚀

🤖 io-api

IO Type Access
🛜 API POST https://agentuity.ai/api/agent_b61035c3341ac311663fad947c57eb7e/1dbcf43
📋 Curl Examples

ℹ️ Info: API & Webhooks requiring authentication will include placeholder tokens in the curl examples. Replace these with your actual credentials.
API Endpoint:

curl -X POST https://agentuity.ai/api/agent_b61035c3341ac311663fad947c57eb7e/1dbcf43 \
  -H "Content-Type: application/json" \
  -d '{}'

🤖 io-cron

No input channels configured — ➕ Add I/O

🤖 io-email

IO Type Access
📩 Email Coming Soon 🚀

🤖 io-sms

IO Type Access
📱 SMS Coming Soon 🚀

🤖 io-webhook

IO Type Access
🪝 Webhook POST https://agentuity.ai/webhook/agent_b45feaf99bbdb042b3e8838c8544abe7/1dbcf43
📋 Curl Examples

ℹ️ Info: API & Webhooks requiring authentication will include placeholder tokens in the curl examples. Replace these with your actual credentials.
Webhook Endpoint:

curl -X POST https://agentuity.ai/webhook/agent_b45feaf99bbdb042b3e8838c8544abe7/1dbcf43 \
  -H "Content-Type: application/json" \
  -d '{}'

🤖 observability-logging

No input channels configured — ➕ Add I/O

🤖 observability-tracing

No input channels configured — ➕ Add I/O

🤖 handler-request

No input channels configured — ➕ Add I/O

🤖 handler-response

IO Type Access
🛜 API POST https://agentuity.ai/api/agent_a2d37fcdd47a1df58c621022909932ba/1dbcf43
📋 Curl Examples

ℹ️ Info: API & Webhooks requiring authentication will include placeholder tokens in the curl examples. Replace these with your actual credentials.
API Endpoint:

curl -X POST https://agentuity.ai/api/agent_a2d37fcdd47a1df58c621022909932ba/1dbcf43 \
  -H "Content-Type: application/json" \
  -d '{}'

🤖 handler-context

No input channels configured — ➕ Add I/O

🤖 storage-vector

No input channels configured — ➕ Add I/O

🤖 example-discord

No input channels configured — ➕ Add I/O

🤖 example-chat

No input channels configured — ➕ Add I/O

🤖 example-slack

IO Type Access
🛜 API POST https://agentuity.ai/api/agent_2ddc01490cd360ec3d1d594999bda18e/1dbcf43
📋 Curl Examples

ℹ️ Info: API & Webhooks requiring authentication will include placeholder tokens in the curl examples. Replace these with your actual credentials.
API Endpoint:

curl -X POST https://agentuity.ai/api/agent_2ddc01490cd360ec3d1d594999bda18e/1dbcf43 \
  -H "Content-Type: application/json" \
  -d '{}'

🤖 example-composio

No input channels configured — ➕ Add I/O

🤖 example-telegram

IO Type Access
🪝 Webhook POST https://agentuity.ai/webhook/agent_52f73139a881ee0e1cdfafb3c6404e70/1dbcf43
📋 Curl Examples

ℹ️ Info: API & Webhooks requiring authentication will include placeholder tokens in the curl examples. Replace these with your actual credentials.
Webhook Endpoint:

curl -X POST https://agentuity.ai/webhook/agent_52f73139a881ee0e1cdfafb3c6404e70/1dbcf43 \
  -H "Content-Type: application/json" \
  -d '{}'

📊 Resource Allocation
Resource Allocation
🧠 Memory 1,000 B
🧮 CPU 0.5 m
💾 Disk 150 kB


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e9e9117 and 7ba7ea1.

⛔ Files ignored due to path filters (4)
  • .github/example-telegram/telegram-setup-1-api-endpoint.png is excluded by !**/*.png
  • .github/example-telegram/telegram-setup-2-bot-father.png is excluded by !**/*.png
  • .github/example-telegram/telegram-setup-3-token.png is excluded by !**/*.png
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (7)
  • .env.example (1 hunks)
  • agentuity.yaml (1 hunks)
  • package.json (1 hunks)
  • src/agents/example-telegram/README.md (1 hunks)
  • src/agents/example-telegram/index.ts (1 hunks)
  • src/agents/example-telegram/telegram.ts (1 hunks)
  • src/agents/test-suite/index.ts (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Prefer loading types from the @agentuity/sdk package (from node_modules)
Import types from @agentuity/sdk
Use structured error handling with try/catch blocks

Files:

  • src/agents/example-telegram/telegram.ts
  • src/agents/example-telegram/index.ts
  • src/agents/test-suite/index.ts
🧠 Learnings (2)
📚 Learning: 2025-09-03T20:25:54.918Z
Learnt from: CR
PR: agentuity/kitchen-sink-ts#0
File: AGENTS.md:0-0
Timestamp: 2025-09-03T20:25:54.918Z
Learning: Applies to **/*.{ts,tsx} : Import types from `agentuity/sdk`

Applied to files:

  • package.json
📚 Learning: 2025-09-03T20:25:54.918Z
Learnt from: CR
PR: agentuity/kitchen-sink-ts#0
File: AGENTS.md:0-0
Timestamp: 2025-09-03T20:25:54.918Z
Learning: Applies to **/*.{ts,tsx} : Prefer loading types from the `agentuity/sdk` package (from node_modules)

Applied to files:

  • package.json
🔇 Additional comments (5)
package.json (1)

31-31: Verify @agentuity/sdk 0.0.146 before merging.

^0.0.146 wasn’t published the last time I checked, so installs could fail if the tag still isn’t live. Please confirm the package exists (or bump to the latest released version).

Based on learnings

.env.example (1)

14-15: Nice touch documenting the Telegram token.

The placeholder keeps the sample environment file aligned with the new bot example so folks can configure it without guesswork.

agentuity.yaml (1)

148-153: New agent entries are wired in cleanly.

example-teams and example-telegram now surface like the other kitchen sink agents, so they’ll be available to the CLI and dashboard without extra manual edits.

src/agents/example-telegram/README.md (1)

1-117: README comprehensively guides the Telegram setup.

End-to-end steps (deploy, BotFather, env vars, webhook curl, testing, limits) cover every piece needed to bring the bot online—great developer experience.
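For reference, webhook registration with Telegram goes through the Bot API's `setWebhook` method. A sketch of the request with placeholder credentials (both values below are made up; substitute your real token and the agent's webhook URL):

```shell
# Illustrative only: register the agent's webhook with Telegram's Bot API.
BOT_TOKEN="123456:TEST-TOKEN"
WEBHOOK_URL="https://agentuity.ai/webhook/agent_example/abc"

# setWebhook points Telegram at an HTTPS endpoint. With real credentials
# this URL would be passed to curl:
#   curl -s "https://api.telegram.org/bot${BOT_TOKEN}/setWebhook?url=${WEBHOOK_URL}"
REQUEST_URL="https://api.telegram.org/bot${BOT_TOKEN}/setWebhook?url=${WEBHOOK_URL}"
echo "$REQUEST_URL"
```

Telegram responds with `{"ok":true,...}` on success, after which updates are POSTed to the webhook instead of being available via polling.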

src/agents/test-suite/index.ts (1)

68-110: Appreciate the inline rationale for skipping Telegram in automation.

Leaving the entries commented with context about webhook constraints prevents accidental load while still documenting how to exercise the agent.

Comment thread src/agents/example-telegram/index.ts Outdated
Comment thread src/agents/example-telegram/telegram.ts

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
src/agents/example-telegram/README.md (1)

35-37: Add a language hint to the .env fenced block.

markdownlint is flagging the unlabeled fence. Tag it as dotenv (or another appropriate lexer) so the lint check passes and readers get syntax highlighting.

````diff
-```
+```dotenv
 TELEGRAM_BOT_TOKEN=1234567890:ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghi
````

<details>
<summary>📜 Review details</summary>

**Configuration used**: CodeRabbit UI

**Review profile**: CHILL

**Plan**: Pro

<details>
<summary>📥 Commits</summary>

Reviewing files that changed from the base of the PR and between 7ba7ea1a339b3142b75f01e23e79bbe4987b7090 and 1dbcf43feb4ece37cd82da7b7952ebd453faa1a4.

</details>

<details>
<summary>📒 Files selected for processing (4)</summary>

* `agentuity.yaml` (1 hunks)
* `src/agents/example-telegram/README.md` (1 hunks)
* `src/agents/example-telegram/index.ts` (1 hunks)
* `src/agents/example-telegram/telegram.ts` (1 hunks)

</details>

<details>
<summary>🚧 Files skipped from review as they are similar to previous changes (1)</summary>

* src/agents/example-telegram/telegram.ts

</details>

<details>
<summary>🧰 Additional context used</summary>

<details>
<summary>📓 Path-based instructions (1)</summary>

<details>
<summary>**/*.{ts,tsx}</summary>


**📄 CodeRabbit inference engine (AGENTS.md)**

> `**/*.{ts,tsx}`: Prefer loading types from the `@agentuity/sdk` package (from node_modules)
> Import types from `@agentuity/sdk`
> Use structured error handling with try/catch blocks

Files:
- `src/agents/example-telegram/index.ts`

</details>

</details><details>
<summary>🧬 Code graph analysis (1)</summary>

<details>
<summary>src/agents/example-telegram/index.ts (3)</summary><blockquote>

<details>
<summary>src/agents/example-slack/index.ts (1)</summary>

* `Agent` (23-119)

</details>
<details>
<summary>src/agents/example-chat/index.ts (1)</summary>

* `Agent` (11-83)

</details>
<details>
<summary>src/agents/example-telegram/telegram.ts (2)</summary>

* `TelegramUpdate` (4-7)
* `sendTelegramMessage` (28-57)

</details>

</blockquote></details>

</details><details>
<summary>🪛 markdownlint-cli2 (0.18.1)</summary>

<details>
<summary>src/agents/example-telegram/README.md</summary>

35-35: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

</details>

</details>

</details>

</details>


@parteeksingh24 parteeksingh24 marked this pull request as draft September 26, 2025 23:55
@parteeksingh24 parteeksingh24 changed the title from "Add Telegram example" to "Add examples" Sep 26, 2025
@agentuity-agent agentuity-agent Bot requested a deployment to preview September 27, 2025 00:04 Abandoned
- Use `waitUntil` for background processing
- Add bot profile null safety
@parteeksingh24 parteeksingh24 marked this pull request as ready for review September 29, 2025 23:30

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
src/agents/example-telegram/README.md (1)

35-37: Add language specifier to fenced code block.

The code block is missing a language identifier. Adding bash or env will enable syntax highlighting and improve readability.

Apply this diff:

````diff
 For local development, add to your `.env` file:
-```
+```env
 TELEGRAM_BOT_TOKEN=1234567890:ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghi
````

As per static analysis hints.

<details>
<summary>📜 Review details</summary>

**Configuration used**: CodeRabbit UI

**Review profile**: CHILL

**Plan**: Pro

<details>
<summary>📥 Commits</summary>

Reviewing files that changed from the base of the PR and between 1dbcf43feb4ece37cd82da7b7952ebd453faa1a4 and a97aa26e90869e5f24c36bd78f7788b3172e55c8.

</details>

<details>
<summary>⛔ Files ignored due to path filters (4)</summary>

* `.github/example-telegram/telegram-setup-1-api-endpoint.png` is excluded by `!**/*.png`
* `.github/example-telegram/telegram-setup-2-bot-father.png` is excluded by `!**/*.png`
* `.github/example-telegram/telegram-setup-3-token.png` is excluded by `!**/*.png`
* `bun.lock` is excluded by `!**/*.lock`

</details>

<details>
<summary>📒 Files selected for processing (13)</summary>

* `.env.example` (1 hunks)
* `README.md` (1 hunks)
* `agentuity.yaml` (1 hunks)
* `src/agents/example-llm-judge/index.ts` (1 hunks)
* `src/agents/example-llm-judge/story-eval.ts` (1 hunks)
* `src/agents/example-slack/index.ts` (2 hunks)
* `src/agents/example-slack/slack.ts` (1 hunks)
* `src/agents/example-streaming/index.ts` (1 hunks)
* `src/agents/example-streaming/sample-data.json` (1 hunks)
* `src/agents/example-telegram/README.md` (1 hunks)
* `src/agents/example-telegram/index.ts` (1 hunks)
* `src/agents/example-telegram/telegram.ts` (1 hunks)
* `src/agents/test-suite/index.ts` (2 hunks)

</details>

<details>
<summary>✅ Files skipped from review due to trivial changes (2)</summary>

* src/agents/example-streaming/sample-data.json
* README.md

</details>

<details>
<summary>🚧 Files skipped from review as they are similar to previous changes (2)</summary>

* src/agents/example-telegram/telegram.ts
* src/agents/test-suite/index.ts

</details>

<details>
<summary>🧰 Additional context used</summary>

<details>
<summary>📓 Path-based instructions (1)</summary>

<details>
<summary>**/*.{ts,tsx}</summary>


**📄 CodeRabbit inference engine (AGENTS.md)**

> `**/*.{ts,tsx}`: Prefer loading types from the `@agentuity/sdk` package (from node_modules)
> Import types from `@agentuity/sdk`
> Use structured error handling with try/catch blocks

Files:
- `src/agents/example-llm-judge/story-eval.ts`
- `src/agents/example-llm-judge/index.ts`
- `src/agents/example-slack/slack.ts`
- `src/agents/example-slack/index.ts`
- `src/agents/example-streaming/index.ts`
- `src/agents/example-telegram/index.ts`

</details>

</details><details>
<summary>🧬 Code graph analysis (4)</summary>

<details>
<summary>src/agents/example-llm-judge/index.ts (2)</summary><blockquote>

<details>
<summary>src/agents/test-suite/index.ts (2)</summary>

* `Agent` (34-142)
* `welcome` (22-26)

</details>
<details>
<summary>src/agents/example-llm-judge/story-eval.ts (2)</summary>

* `evaluationSchema` (16-20)
* `formatReport` (25-56)

</details>

</blockquote></details>
<details>
<summary>src/agents/example-slack/index.ts (2)</summary><blockquote>

<details>
<summary>src/agents/example-slack/slack.ts (2)</summary>

* `verifySlackWebhook` (22-121)
* `SlackAgentRequest` (11-19)

</details>
<details>
<summary>src/lib/utils.ts (1)</summary>

* `handleError` (105-114)

</details>

</blockquote></details>
<details>
<summary>src/agents/example-streaming/index.ts (1)</summary><blockquote>

<details>
<summary>src/lib/utils.ts (1)</summary>

* `handleHelpMessage` (17-103)

</details>

</blockquote></details>
<details>
<summary>src/agents/example-telegram/index.ts (2)</summary><blockquote>

<details>
<summary>src/agents/example-chat/index.ts (1)</summary>

* `Agent` (11-83)

</details>
<details>
<summary>src/agents/example-telegram/telegram.ts (2)</summary>

* `TelegramUpdate` (4-7)
* `sendTelegramMessage` (28-57)

</details>

</blockquote></details>

</details><details>
<summary>🪛 markdownlint-cli2 (0.18.1)</summary>

<details>
<summary>src/agents/example-telegram/README.md</summary>

35-35: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

</details>

</details>

</details>

<details>
<summary>🔇 Additional comments (26)</summary><blockquote>

<details>
<summary>src/agents/example-slack/slack.ts (1)</summary><blockquote>

`96-96`: **LGTM! Clean refactor to optional chaining.**

The change simplifies the bot profile check while preserving the same guard logic against infinite loops.

</blockquote></details>
<details>
<summary>src/agents/example-slack/index.ts (5)</summary><blockquote>

`29-32`: **LGTM! Proper manual trigger guard.**

This prevents unintended manual invocations of a webhook-only agent.

---

`34-43`: **LGTM! Correct webhook verification.**

The verification call properly handles Slack challenges, signature validation, and bot message filtering before processing.

---

`45-52`: **LGTM! Proper non-blocking webhook pattern.**

The early `200 OK` response with `ctx.waitUntil` for background processing aligns with Slack's webhook requirements and matches the pattern used in other example agents (e.g., example-telegram).

---

`97-101`: **LGTM! Proper background task error containment.**

Errors inside the background task are logged and handled gracefully without throwing to the outer scope, which is the correct pattern for `ctx.waitUntil` tasks.

---

`86-86`: **No action needed: ‘gpt-5-mini’ is a supported OpenAI model identifier.**

</blockquote></details>
<details>
<summary>src/agents/example-streaming/index.ts (6)</summary><blockquote>

`1-20`: **LGTM! Clean setup and boilerplate.**

Imports follow coding guidelines (types from `@agentuity/sdk`), and the help message handler is properly integrated.

---

`29-105`: **LGTM! Solid manual streaming pattern with proper resource cleanup.**

The manual progress example demonstrates correct usage of `ctx.stream.create()`, `WritableStream` writer lifecycle (`getWriter()`, `releaseLock()`), and background processing with `ctx.waitUntil()`. The `finally` block ensures the stream is always closed.

---

`128-146`: **LGTM! Correct background streaming with pipeTo.**

The pattern properly uses `ctx.waitUntil()` for background LLM streaming, pipes the result to the created stream, and ensures the stream is closed even on error.

---

`170-203`: **LGTM! Clean agent-to-agent streaming.**

The agent chain example correctly uses `ctx.getAgent()`, calls another agent with `run()`, retrieves the response stream, and forwards it with `resp.stream()`. Error handling is appropriate.

---

`208-226`: **LGTM! Well-structured welcome metadata.**

The welcome message provides clear context and links to the streaming guide, with well-defined prompts for each streaming pattern.

---

`108-167`: **‘gpt-5-nano’ model identifier is correct.** Confirmed that OpenAI’s GPT-5 family includes this variant.

</blockquote></details>
<details>
<summary>.env.example (1)</summary><blockquote>

`14-16`: **LGTM!**

The Telegram bot token placeholder follows the established pattern for example agent environment variables and is properly commented out.

</blockquote></details>
<details>
<summary>src/agents/example-llm-judge/story-eval.ts (1)</summary><blockquote>

`1-56`: **LGTM!**

The evaluation schema is well-structured with proper Zod validation constraints (min/max for scoring), clear field descriptions, and a clean formatting function. The schema design appropriately separates story evaluation from verdict, making it easy to consume in the LLM-as-a-judge workflow.

</blockquote></details>
<details>
<summary>agentuity.yaml (1)</summary><blockquote>

`148-156`: **LGTM!**

The three new agent registrations follow the established format and align with the new agent implementations added in this PR. The descriptions are clear and consistent with the existing agent entries.

</blockquote></details>
<details>
<summary>src/agents/example-llm-judge/index.ts (2)</summary><blockquote>

`1-10`: **LGTM!**

The imports and function signature properly use types from `@agentuity/sdk` and follow the coding guidelines.

---

`72-90`: **LGTM!**

The welcome function provides clear documentation and useful example prompts for testing the agent.

</blockquote></details>
<details>
<summary>src/agents/example-telegram/index.ts (9)</summary><blockquote>

`1-4`: **LGTM!**

The imports are well-organized and follow the coding guidelines by loading types from `@agentuity/sdk`. All imported modules are used appropriately in the agent implementation.

---

`11-14`: **LGTM!**

The manual trigger rejection is clear and appropriate for a webhook-only agent.

---

`16-22`: **LGTM!**

The webhook parsing and message filtering logic correctly handles non-text and bot messages, preventing the agent from processing irrelevant updates or creating feedback loops.

---

`24-29`: **LGTM!**

The fire-and-forget pattern is appropriate for Telegram webhook handlers, ensuring a fast 200 OK response while processing the message asynchronously in the background.

---

`31-66`: **LGTM!**

The conversation history management is well-implemented:
- Unique per-chat storage with appropriate error handling
- Reasonable 10-message limit to control context size
- 24-hour TTL aligns with typical chat session lifetimes
- The non-null assertion on line 56 is safe given the earlier guard

---

`76-89`: **LGTM!**

The token validation correctly addresses the previous review comment by failing fast with a clear error message when `TELEGRAM_BOT_TOKEN` is missing. The validated token is then safely passed to `sendTelegramMessage` with proper threading via `replyToMessageId`.

---

`91-103`: **LGTM!**

The assistant response is correctly appended to history with a matching Unix timestamp format, and the error handling follows the coding guidelines by using structured try/catch with proper logging via `ctx.logger`.

---

`105-108`: **LGTM!**

The welcome function provides clear, informative metadata about the Telegram agent with appropriate styling and helpful notes about testing limitations in DevMode.

---

`68-74`: **GPT-5 mini is a valid model name**  
The `gpt-5-mini` variant is officially supported in the OpenAI API alongside `gpt-5` and `gpt-5-nano` ([openai.com](https://openai.com/te-IN/index/introducing-gpt-5-for-developers/?utm_source=openai)). No changes required. 

> Likely an incorrect or invalid review comment.

</blockquote></details>

</blockquote></details>

</details>


Comment on lines +11 to +70

```ts
  try {
    // Get the prompt from request, or use a default
    const prompt =
      (await req.data.text()) ||
      'Write a short story about an AI learning to paint';

    ctx.logger.info('Starting LLM-as-a-judge evaluation');

    // Get stories from gateway-provider
    const gatewayAgent = await ctx.getAgent({ name: 'gateway-provider' });
    const stories = await gatewayAgent.run({ data: prompt });
    const storiesText = await stories.data.text();

    ctx.logger.debug('Received stories from gateway-provider');

    // Create evaluation prompt
    const evaluationPrompt = `
You are evaluating two AI-generated short stories.

Here are the stories:

${storiesText}

Extract each story text:
- OpenAI story: appears after "### OpenAI (GPT-5 Nano)"
- Google story: appears after "### Google (Gemini 2.0 Flash)"

For each story, provide:
1. Creativity score (1-10): How original and imaginative is it?
2. Quality score (1-10): Overall writing quality
3. Strengths: What works well (1-2 sentences)

Finally, provide a verdict declaring which story is better and why (2-3 sentences).`;

    ctx.logger.info('Generating structured evaluation');

    // Generate structured evaluation
    const evaluation = await generateObject({
      model: openai('gpt-5-nano'),
      schema: evaluationSchema,
      system:
        'You are a literary critic evaluating short AI-generated stories.',
      prompt: evaluationPrompt,
    });

    // Log key metrics
    ctx.logger.debug('Evaluation scores', {
      openai: evaluation.object.openai.quality,
      google: evaluation.object.google.quality,
    });

    // Return formatted report
    return resp.markdown(formatReport(evaluation.object));
  } catch (error) {
    ctx.logger.error('Error in LLM judge evaluation:', error);
    return resp.text(
      'Sorry, there was an error running the evaluation. Please ensure the gateway-provider agent is available.'
    );
  }
}
```


⚠️ Potential issue | 🟠 Major

Verify the model name and consider more robust parsing.

The implementation follows structured error handling guidelines and correctly integrates with the evaluation schema. However, there are two concerns:

  1. Model name verification needed: Line 49 uses openai('gpt-5-nano'), which doesn't match known OpenAI model naming conventions (typically gpt-4, gpt-4-turbo, gpt-3.5-turbo, etc.). If this model doesn't exist, the agent will fail at runtime.

  2. Fragile format coupling: Lines 35-36 hardcode markdown section headers to extract stories from gateway-provider output. If the upstream agent changes its output format, the evaluation prompt won't correctly identify the stories, leading to poor extraction results or evaluation failures.

For issue 1, verify the correct model name:

What OpenAI models are available through the OpenAI API as of 2025? Is gpt-5-nano a valid model?

For issue 2, consider adding format validation or using a more robust parsing approach:

```diff
     // Get stories from gateway-provider
     const gatewayAgent = await ctx.getAgent({ name: 'gateway-provider' });
     const stories = await gatewayAgent.run({ data: prompt });
     const storiesText = await stories.data.text();
 
     ctx.logger.debug('Received stories from gateway-provider');
+
+    // Validate expected format
+    if (!storiesText.includes('### OpenAI') || !storiesText.includes('### Google')) {
+      throw new Error('Unexpected format from gateway-provider agent');
+    }
 
     // Create evaluation prompt
```
🤖 Prompt for AI Agents
In src/agents/example-llm-judge/index.ts around lines 11 to 70, the review flags
two problems: the call to openai('gpt-5-nano') may reference a non-existent
model and the evaluation prompt relies on brittle hard-coded markdown headers to
locate stories. Fix by replacing the model name with a verified supported model
(e.g., gpt-4 or gpt-4o or configurable via env/ctx config) or make the model
string configurable and validate it at startup; and make story extraction
resilient by having the gateway-provider return structured JSON (preferred) or
include explicit delimiters you parse robustly (e.g., regex with safe fallbacks
and validation checks) and add error handling/logging when parsing fails so the
evaluation uses validated story texts instead of assuming exact headers.
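The resilient-parsing suggestion above can be sketched in plain TypeScript. This is an illustrative helper, not code from the agent; the `### OpenAI` / `### Google` headers are the ones the review cites and are assumed to be the gateway-provider's format:

```typescript
// Delimiter-based extraction with validation, per the prompt above:
// capture each section between its "### <Name>" header and the next
// header (or end of input), and fail loudly when a section is missing
// instead of evaluating garbage.
function extractStories(text: string): { openai: string; google: string } {
  const grab = (header: string): string | null => {
    // Tolerate extra whitespace around the header; stop at the next
    // "### " header or end of string.
    const re = new RegExp(
      `###\\s*${header}\\s*\\n([\\s\\S]*?)(?=\\n###\\s|$)`
    );
    const match = text.match(re);
    return match ? match[1].trim() : null;
  };

  const openai = grab('OpenAI');
  const google = grab('Google');
  if (openai === null || google === null) {
    throw new Error('Unexpected format from gateway-provider agent');
  }
  return { openai, google };
}
```

Having the upstream agent return structured JSON remains the sturdier option; this regex fallback only buys tolerance for whitespace drift, not for renamed sections.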


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
src/agents/example-telegram/index.ts (1)

16-22: Consider error handling for webhook parsing.

The webhook parsing at line 17 may throw if the payload is malformed. While Telegram webhooks should be reliable, adding a try-catch around the parsing and filtering logic would prevent unhandled exceptions from reaching the platform.

Apply this diff to add error handling:

+  let update: TelegramUpdate | undefined;
+  try {
   // Parse webhook update
-  const update = await req.data.object<TelegramUpdate>();
+    update = await req.data.object<TelegramUpdate>();
+  } catch (error) {
+    ctx.logger.error('Error parsing Telegram webhook:', error);
+    return resp.text('OK');
+  }
 
   // Filter non-text messages and bot messages
   if (!update?.message?.text || update.message.from?.is_bot) {
     return resp.text('OK');
   }
src/agents/example-telegram/README.md (1)

108-111: Rate limits confirmed. The documented limits (1 message/sec per chat, 20 messages/min per group, and ~30 messages/sec overall for free-tier broadcasts) match the current Telegram Bot API. Optionally, you may note that enabling Paid Broadcasts raises this up to 1,000 messages/sec.
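For readers wondering how the per-chat limit translates into code, here is a small illustrative scheduler (not Telegram API code, and not part of this PR) that spaces sends for a single chat at least one second apart:

```typescript
// Given the times (ms) at which messages become ready to send for one
// chat, return the earliest times they may actually be sent under a
// "1 message per second per chat" limit. Pure function: easy to test
// and to swap into a real send queue.
function schedule(requestedMs: number[], minGapMs = 1000): number[] {
  const out: number[] = [];
  let earliest = -Infinity; // earliest permitted next send time
  for (const t of [...requestedMs].sort((a, b) => a - b)) {
    const send = Math.max(t, earliest); // delay only if within the gap
    out.push(send);
    earliest = send + minGapMs;
  }
  return out;
}
```

A production bot would additionally honor the per-group and global limits, typically with one such queue per chat plus a global token bucket.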

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a97aa26 and 4559f09.

📒 Files selected for processing (4)
  • src/agents/example-composio/README.md (1 hunks)
  • src/agents/example-discord/README.md (1 hunks)
  • src/agents/example-telegram/README.md (1 hunks)
  • src/agents/example-telegram/index.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • src/agents/example-discord/README.md
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Prefer loading types from the @agentuity/sdk package (from node_modules)
Import types from @agentuity/sdk
Use structured error handling with try/catch blocks

Files:

  • src/agents/example-telegram/index.ts
🧬 Code graph analysis (1)
src/agents/example-telegram/index.ts (2)
src/agents/example-chat/index.ts (1)
  • Agent (11-83)
src/agents/example-telegram/telegram.ts (2)
  • TelegramUpdate (4-7)
  • sendTelegramMessage (28-57)
🔇 Additional comments (6)
src/agents/example-composio/README.md (1)

44-46: LGTM! Documentation formatting improvement.

The switch to an env-tagged code block improves syntax highlighting and aligns with the documentation standards being adopted across other agent READMEs in this PR.

src/agents/example-telegram/README.md (2)

1-66: LGTM! Clear and comprehensive setup guide.

The step-by-step instructions are well-structured and cover all necessary configuration points including security best practices. The inclusion of both local development and production deployment options is helpful.


73-88: LGTM! Accurate functional description.

The description correctly reflects the agent's capabilities and aligns with the implementation in index.ts, including the 10-message history limit and supported chat types.
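The 10-message history limit mentioned above amounts to a simple trim-on-append. A hypothetical sketch of that behavior (the `Message` type and `appendHistory` name are illustrative, not taken from the agent's source):

```typescript
// Append the new user/assistant exchange to the stored history and
// keep only the most recent 10 messages before writing back to KV.
type Message = { role: 'user' | 'assistant'; content: string };

function appendHistory(history: Message[], ...next: Message[]): Message[] {
  const MAX_MESSAGES = 10;
  // slice(-N) keeps the last N entries, dropping the oldest first.
  return [...history, ...next].slice(-MAX_MESSAGES);
}
```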

src/agents/example-telegram/index.ts (3)

6-14: LGTM! Proper type usage and clear trigger handling.

The function signature correctly uses types from @agentuity/sdk as per coding guidelines, and the manual trigger guard provides clear feedback for webhook-only operation.


34-66: LGTM! Solid KV history management with proper error handling.

The conversation history implementation follows established patterns from example-chat, includes structured error handling per coding guidelines, and correctly manages message history with appropriate TTL. The inline type definition is acceptable for this single-use case.


105-108: LGTM! Appropriate welcome message for platform-specific agent.

The welcome export correctly explains the agent's purpose and limitations regarding DevMode testing, which is appropriate for a webhook-based Telegram bot.

- Use `waitUntil` for Telegram example
- Update Streaming example to auto-handle encoding

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4559f09 and 011516d.

📒 Files selected for processing (2)
  • src/agents/example-streaming/index.ts (1 hunks)
  • src/agents/example-telegram/index.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/agents/example-telegram/index.ts
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Prefer loading types from the @agentuity/sdk package (from node_modules)
Import types from @agentuity/sdk
Use structured error handling with try/catch blocks

Files:

  • src/agents/example-streaming/index.ts
🧬 Code graph analysis (1)
src/agents/example-streaming/index.ts (1)
src/lib/utils.ts (1)
  • handleHelpMessage (17-103)
🔇 Additional comments (5)
src/agents/example-streaming/index.ts (5)

1-5: LGTM! Clean imports following project guidelines.

Types are correctly imported from @agentuity/sdk as per coding guidelines, and AI SDK usage aligns with best practices from the learnings.


7-20: LGTM! Proper agent structure with help handling.

Function signature is correct and the early help message handling pattern is clean and well-implemented.


28-106: Well-structured manual streaming example with proper resource cleanup.

The pattern correctly demonstrates ctx.stream.create() with manual writes, proper use of ctx.waitUntil() for background processing, and excellent resource management with writer.releaseLock() and stream.close() in the finally block. The 4-second delay at line 83 is appropriate for this example code to demonstrate streaming behavior.
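As a rough illustration of the writer lifecycle this comment praises (acquire a writer, write chunks in the background, always release the lock and close in a `finally` block), here is a self-contained sketch using the standard Web Streams API rather than the SDK's `ctx.stream.create()`, which is assumed to follow a similar writer/lock discipline:

```typescript
// Manual-streaming sketch with guaranteed cleanup: the producer writes
// chunks through a TransformStream and the consumer reads them back.
// Cleanup in finally mirrors the writer.releaseLock() / stream.close()
// pattern described in the review.
async function produce(): Promise<string> {
  const { readable, writable } = new TransformStream<string, string>();
  const writer = writable.getWriter();

  // Background producer, analogous to work kicked off via ctx.waitUntil().
  const producing = (async () => {
    try {
      for (const chunk of ['hello', ' ', 'world']) {
        await writer.write(chunk);
      }
    } finally {
      // Runs even if a write throws: close the stream, then free the lock.
      await writer.close();
      writer.releaseLock();
    }
  })();

  // Consumer: drain the readable side.
  const reader = readable.getReader();
  let out = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    out += value;
  }
  await producing; // surface any producer error
  return out;
}
```

The key design point is that cleanup lives with the producer, so a failed write cannot leave the stream locked for later callers.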


170-204: Excellent agent-to-agent streaming implementation.

The agent chain pattern is well-implemented with proper error handling, informative logging, and correct usage of ctx.getAgent(), agent run(), and resp.stream() for forwarding. The error message helpfully mentions the dependency on example-chat agent.


209-227: Well-crafted welcome function with clear UX.

The welcome message provides helpful context with styled formatting and documentation links. The three prompts correctly correspond to the implemented streaming patterns with appropriate content types.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 011516d and d43ef34.

⛔ Files ignored due to path filters (1)
  • bun.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • package.json (2 hunks)
  • src/agents/example-streaming/index.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

**/*.{ts,tsx}: Prefer loading types from the @agentuity/sdk package (from node_modules)
Import types from @agentuity/sdk
Use structured error handling with try/catch blocks

Files:

  • src/agents/example-streaming/index.ts
🧠 Learnings (2)
📚 Learning: 2025-09-03T20:25:54.918Z
Learnt from: CR
PR: agentuity/kitchen-sink-ts#0
File: AGENTS.md:0-0
Timestamp: 2025-09-03T20:25:54.918Z
Learning: Applies to **/*.{ts,tsx} : Import types from `agentuity/sdk`

Applied to files:

  • package.json
📚 Learning: 2025-09-03T20:25:54.918Z
Learnt from: CR
PR: agentuity/kitchen-sink-ts#0
File: AGENTS.md:0-0
Timestamp: 2025-09-03T20:25:54.918Z
Learning: Applies to **/*.{ts,tsx} : Prefer loading types from the `agentuity/sdk` package (from node_modules)

Applied to files:

  • package.json
🧬 Code graph analysis (1)
src/agents/example-streaming/index.ts (1)
src/lib/utils.ts (1)
  • handleHelpMessage (17-103)

  },
  "dependencies": {
-    "@agentuity/sdk": "^0.0.146",
+    "@agentuity/sdk": "^0.0.147",


⚠️ Potential issue | 🔴 Critical

Dependency version 0.0.147 is not available on npm

npm still lists 0.0.144 as the latest publish for @agentuity/sdk, so installing ^0.0.147 will currently fail. Please either wait for 0.0.147 to be published or pin to the latest available release.
(npmjs.com)

🤖 Prompt for AI Agents
In package.json around line 31, the dependency "@agentuity/sdk": "^0.0.147"
references a version not published to npm; update the version to a published
release (e.g., change to "^0.0.144" or pin to "0.0.144") or revert to the
previous working version, then run npm install (or yarn) to verify resolution
and update lockfile accordingly.

@agentuity-agent agentuity-agent Bot requested a deployment to preview October 23, 2025 00:27 Abandoned
@parteeksingh24 parteeksingh24 marked this pull request as draft October 23, 2025 00:28
@agentuity-agent agentuity-agent Bot requested a deployment to preview October 23, 2025 00:30 Abandoned
- Add Checkly (heartbeat) monitoring for out-of-band agents
- Improve Slack bot (`app_mention` events, retry handling, Groq)
- Update streaming example (list/delete methods)
- Add example images for Teams manifest (`color.png`, `outline.png`)
@agentuity-agent agentuity-agent Bot requested a deployment to preview October 28, 2025 23:37 Abandoned
@agentuity-agent agentuity-agent Bot requested a deployment to preview October 29, 2025 17:39 Abandoned
@agentuity-agent agentuity-agent Bot requested a deployment to preview December 10, 2025 17:46 Abandoned