
LiteAgent

LiteAgent is a compact, token-efficient AI agent tool-calling library for .NET. It uses a text-first, TOON (Token-Oriented Object Notation) protocol to represent tool calls and provides a lightweight orchestration runtime for executing plugin methods.


The TOON Advantage

Standard LLM tool calling relies on massive JSON schemas. TOON flattens these structures into a dense, text-based format that LLMs understand natively.

  • Standard JSON: {"tool_calls":[{"id":"1","function":{"name":"greet","arguments":"{\"name\":\"Jorge\"}"}}]} (~60 tokens)
  • TOON (LiteAgent): greet{Jorge} (~6 tokens)
  • TOON Multi-Arg: log{System|Error\|Critical} (| separates arguments; a literal pipe inside an argument is escaped as \|)
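As an illustration, the encoding can be sketched with a small helper (hypothetical code, not LiteAgent's actual encoder; the Escape/Encode names are made up):

```csharp
using System;
using System.Linq;

static class ToonSketch
{
    // A literal pipe inside an argument must be escaped so it is not
    // mistaken for the argument separator.
    static string Escape(string arg) => arg.Replace("|", "\\|");

    // Joins escaped arguments with | and wraps them in braces after the tool name.
    public static string Encode(string tool, params string[] args) =>
        $"{tool}{{{string.Join("|", args.Select(Escape))}}}";
}

// ToonSketch.Encode("greet", "Jorge")                  -> greet{Jorge}
// ToonSketch.Encode("log", "System", "Error|Critical") -> log{System|Error\|Critical}
```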

Multi-Model Support

LiteAgent supports the most capable models in the industry through official and high-performance SDKs:

  • Azure OpenAI: Enterprise-grade integration.
  • Google Gemini: Official Google.GenAI support (Flash & Pro).
  • Anthropic Claude: High-reasoning agentic workflows via Anthropic.SDK.
  • Generic OpenAI: Any OpenAI-compatible endpoint, including Ollama, Groq, and DeepSeek.

Project Structure

  • Connectors: Pluggable AI clients (ILiteClient) for Azure, Gemini, Claude, and local models.
  • Actions: The core engine that parses TOON and executes methods via compiled delegates for high-performance tool invocation (LiteActions).
  • Tooling: Plugin system and attributes ([LitePlugin]) to expose code efficiently.
  • Extensions: Fluent API for seamless .NET Dependency Injection.
  • Constants: Shared definitions like Roles for messaging.

Quick Start: Implementation Guide

1. Define your Tools

Plugins are plain C# classes, keeping your code clean and decoupled. Simply decorate your methods with [LitePlugin] to make them discoverable. If a call fails or the LLM emits invalid syntax, the agent uses MaxRetries (default: 2) to self-correct and try again.

using LiteAgent.Tooling;

public class BusinessTools
{
    [LitePlugin("Sends a greet asking for the name of the person", maxRetries = 5)]
    public string Greet(string name) => $"Hello, {name}!";

    [LitePlugin("Retrieves items by category")]
    public List<string> GetInventory(string category) => 
        new() { "Laptop", "Mouse", "Keyboard" };
}

2. Configuration & DI

LiteAgent is designed to work with the .NET Dependency Injection container. Crucially, all plugin classes must be registered in the Service Provider first.

Option A: Fluent Registration (Recommended)

Use the AddLiteAgent extension to configure parameters and register plugins that are already in the DI container.

using LiteAgent.Extensions;

var builder = Host.CreateApplicationBuilder(args);

// 1. Register your Plugin classes in the DI container
builder.Services.AddSingleton<BusinessTools>();
builder.Services.AddSingleton<InventoryPlugins>();

// 2. Register the AI Connector (alternatives: AddGeminiLiteClient, AddClaudeLiteClient, or AddGenericOpenAILiteClient)
builder.Services.AddAzureOpenAILiteClient(
    apiKey: "your-api-key",
    deploymentName: "gpt-4o-mini",
    endpoint: "https://your-resource.openai.azure.com"
);


// 3. Configure the Agent with the registered plugins
builder.Services.AddLiteAgent(config => 
{
    config.AddPlugin<BusinessTools>();
    config.AddPlugin<InventoryPlugins>();
    config.SetTemperature(0.7f);
    config.SetMaxTokens(1000);
    // Sets the limit for the history pruning (Default: 128,000)
    config.SetMaxContextTokens(128000);
    // Limits the agent's reasoning loop. Prevents infinite tool-calling 
    // and protects your token budget from runaway execution.
    config.SetMaxTurns(10);
});

using IHost host = builder.Build();

Option B: Manual Instance Registration

If your tools are not managed by the DI container, you can still add direct instances to the agent at runtime.

var agent = host.Services.GetRequiredService<LiteOrchestratorAgent>();
var manualTool = new BusinessTools();

agent.RegisterToolInstances(manualTool);

3. Run the Agent

The LiteOrchestratorAgent manages the autonomous Think-Act-Observe cycle. You can provide specific context or instructions right before sending a message.

Option A: Managed History (Internal)

Each agent instance maintains an internal _history list. You can make stateless calls, in which no memory is stored, or stateful calls that keep the history for multi-turn conversations.

var agent = host.Services.GetRequiredService<LiteOrchestratorAgent>();

// Add custom instructions or personality at runtime
agent.AddContext("You love to crack some silly jokes when returning final answers.");

// Customize model settings at runtime
agent.Configure(temperature: 0.7f, maxTokens: 1000);


// Start the conversation. stateless: true discards the conversation after the response; false preserves the history.
string response = await agent.SendMessageAsync("Greet Jorge and check the office inventory", stateless: true);


Console.WriteLine($"Agent: {response}");

Option B: External History (Full Control)

Pass your own List<LiteMessage>. The agent will automatically inject system instructions and custom context if they are missing, and apply pruning rules.

var myHistory = new List<LiteMessage>(); // Could be loaded from a Database
string response = await agent.SendMessageAsync("Check inventory", myHistory);

Technical Features

Autonomous Agentic Loop

The agent manages a self-correcting cycle:

  1. System Prompt: Injects dynamic TOON instructions and any AddContext data.
  2. Execute: If the LLM generates a TOON string (e.g., greet{Jorge}), the agent executes the C# method.
  3. Resiliency (MaxRetries): If a tool call fails or has invalid syntax, the agent uses a feedback loop to attempt a fix without crashing the session.
  4. Observe: Results (including execution traces) are fed back to the model.
  5. Finalize: The cycle repeats until the LLM provides a final natural language response.
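Conceptually, the cycle above has this shape (a pseudocode sketch with hypothetical names — client, ToonParser, and ExecuteWithRetriesAsync are illustrative, not LiteAgent's actual internals):

```csharp
// Conceptual sketch of the Think-Act-Observe loop, not the library's source.
for (int turn = 0; turn < maxTurns; turn++)
{
    string reply = await client.CompleteAsync(history);                  // Think
    if (!ToonParser.TryParse(reply, out var toolCall))                   // no TOON call?
        return reply;                                                    // Finalize

    string result = await ExecuteWithRetriesAsync(toolCall, maxRetries); // Act + Resiliency
    history.Add(new LiteMessage(Roles.Tool, result));                    // Observe
}
```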

Smart Dependency Resolution

When using AddPlugin<T>(), the agent resolves the instance directly from your IServiceProvider. This allows your plugins to use their own injected dependencies (like DB Contexts or specialized services) via standard constructor injection.

Multi-Level Execution Governance

LiteAgent provides granular control over the agent's "patience" and budget:

  • MaxTurns: Limits the total number of reasoning cycles per message to prevent infinite loops and runaway costs.

  • MaxRetries: Specifically limits how many times the agent can attempt to fix a single failing tool execution before giving up.

  • Token Usage Tracking: Use agent.GetTokenUsage() at any time to retrieve the accumulated Prompt, Completion, and Total tokens for the current session.
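For example, you can inspect the spend after a message (the property names on the usage object are illustrative; check the actual type returned by GetTokenUsage()):

```csharp
var agent = host.Services.GetRequiredService<LiteOrchestratorAgent>();

string response = await agent.SendMessageAsync("Audit the inventory", stateless: true);

// Property names are illustrative; inspect the actual return type of GetTokenUsage().
var usage = agent.GetTokenUsage();
Console.WriteLine($"Prompt: {usage.PromptTokens} | Completion: {usage.CompletionTokens} | Total: {usage.TotalTokens}");
```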

Sequence Orchestrator (Pipelines)

LiteAgent supports Autonomous Chaining. Instead of multiple round-trips between the LLM and your server, the agent can plan and execute a complex sequence of plugins in a single turn using execute_sequence.

Features:

  • Zero-Latency Chaining: Execute Plugin A | Plugin B | Plugin C entirely in C#.
  • Indexed References: Use $1, $2, etc., to pass results from previous steps to the next one.
  • Dot Notation Access: Access specific properties of complex objects (e.g., $1.id or $1.email).
  • Type Discovery: The system automatically exposes return types (like (id:int,name:string)) so the LLM knows exactly which properties are available for chaining.
  • Execution Trace: Returns a summarized trace of each step: [#1: get_user -> success] [#2: get_balance -> 500].

Example: execute_sequence{get_user{Jorge}|get_balance{$1.id}|send_email{$1.email|$2}}
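The indexed-reference mechanism can be sketched as a small resolver (a simplified illustration; LiteAgent's internal implementation may differ, and the Resolve name is made up):

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

static class SequenceSketch
{
    // Replaces $N.prop references with values taken from earlier step results.
    public static string Resolve(string arg, IReadOnlyList<Dictionary<string, object>> results) =>
        Regex.Replace(arg, @"\$(\d+)\.(\w+)", m =>
        {
            var step = results[int.Parse(m.Groups[1].Value) - 1]; // $1 -> results[0]
            return step[m.Groups[2].Value]?.ToString() ?? "";
        });
}

// Given results[0] = { "id" = 42, "email" = "jorge@example.com" }:
// Resolve("$1.id", results)    -> "42"
// Resolve("$1.email", results) -> "jorge@example.com"
```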

Smart Context Pruning

To avoid "Context Window Exceeded" errors, LiteAgent includes a Pruning Mechanism:

  1. System Preservation: System instructions and custom context are always kept at the top of the stack.

  2. Sliding Window: When EstimateTokens exceeds MaxContextTokens, the oldest conversational messages are removed first.

  3. Automatic Injection: When using external history, EnsureSystemContext verifies that the agent's core instructions are present before processing.
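The steps above can be sketched as a sliding-window pass. This is a minimal illustration under stated assumptions — that LiteMessage exposes Role and Content properties and that tokens are estimated at roughly 4 characters each — not LiteAgent's actual pruning code:

```csharp
using System.Collections.Generic;
using System.Linq;

// Simplified illustration of the pruning pass.
static void PruneHistory(List<LiteMessage> history, int maxContextTokens)
{
    // Rough heuristic: ~4 characters per token, plus per-message overhead.
    int Estimate() => history.Sum(m => m.Content.Length / 4 + 4);

    while (Estimate() > maxContextTokens)
    {
        // System instructions and custom context stay at the top of the stack;
        // remove the oldest conversational message instead.
        int oldest = history.FindIndex(m => m.Role != Roles.System);
        if (oldest < 0) break;
        history.RemoveAt(oldest);
    }
}
```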


Keywords

LLM, OpenAI, Azure OpenAI, token optimization, reduce tokens, function calling, tool calling, .NET AI, AI agents, Semantic Kernel alternative, prompt optimization, cost reduction, TOON format


Developer documentation

NuGet package

dotnet add package LiteAgent --version 0.1.10

License

MIT
