Agent Smith

An API to create local-first, human-friendly agents in the browser or Node.js


📚 Read the documentation

Check the 💻 examples

What is an agent?

An agent is an anthropomorphic representation of a bot. It can:

  • Think: use language model servers to perform inference queries
  • Interact: communicate with the user to get input and feedback
  • Work: manage long-running jobs with multiple tasks, and run custom terminal commands
  • Remember: use transient or semantic memory to store data
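
The "think" capability, for example, is an inference query against a language model server. Here is a minimal sketch, assuming a brain can be created without preconfigured experts and left to auto discover a local backend (the full, explicit setup is shown in the Node.js example below):

import { useAgentBrain } from "@agent-smith/brain";

// Assumption: a brain created with no experts will auto discover
// a running local inference server, as in the Node.js example below
const brain = useAgentBrain();
await brain.init();
// "Think": run an inference query against the discovered backend
await brain.think("list the planets of the solar system");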

Packages

| Name | Description | Node.js | Browser |
| --- | --- | --- | --- |
| @agent-smith/body | The body | ❌ | ✅ |
| @agent-smith/brain | The brain | ✅ | ✅ |
| @agent-smith/jobs | Jobs | ✅ | ✅ |
| @agent-smith/tmem | Transient memory | ❌ | ✅ |
| @agent-smith/tmem-jobs | Jobs transient memory | ❌ | ✅ |
| @agent-smith/smem | Semantic memory | ✅ | ❌ |
| @agent-smith/tfm | Templates for models | ✅ | ✅ |
| @agent-smith/lmtask | Yaml model task | ✅ | ✅ |
| @agent-smith/cli | Terminal client | ✅ | ❌ |
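
The packages are published independently, so you can install only what you need. For example, to add the brain and jobs packages to a Node.js project (assuming the standard npm registry):

npm install @agent-smith/brain @agent-smith/jobs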

Philosophy

  • Composable: the packages have limited responsibilities and can work together
  • Declarative: focus on the business logic by expressing features simply
  • Explicit: keep it simple and under user control, with no hidden magic

FAQ

  • What local or remote inference servers can I use?

It currently works with Llama.cpp, Koboldcpp and Ollama.

It also works in the browser, using GPU-only inference with small models.

  • Can I use this with OpenAI or other big APIs?

Sorry, no: this library favours local-first or private remote inference servers.

Example

Terminal client

Simple inference query (using the inference plugin):

lm q list the planets of the solar system

Query a thinking model, here qwq (from the models plugin):

lm think "solve this math problem: ..." m=qwq

Compare images (using the vision plugin):

lm vision img1.jpg img2.jpg "Compare the images"

Generate a commit message in a git repository (using the git plugin):

lm commit

Terminal client plugins

| Name | Description | Doc |
| --- | --- | --- |
| @agent-smith/feat-models | Models | doc |
| @agent-smith/feat-inference | Inference | doc |
| @agent-smith/feat-vision | Vision | doc |
| @agent-smith/feat-git | Git | doc |

Node.js example

import { useLmBackend, useLmExpert, useAgentBrain } from "@agent-smith/brain";

// templateName and modelName are the prompt template and model
// configured for your local server
const backend = useLmBackend({
    name: "koboldcpp",
    localLm: "koboldcpp",
    onToken: (t) => process.stdout.write(t),
});

const expert = useLmExpert({
    name: "koboldcpp",
    backend: backend,
    template: templateName,
    model: { name: modelName, ctx: 2048 },
});
const brain = useAgentBrain([expert]);

console.log("Auto discovering brain backend ...");
await brain.init();
brain.ex.checkStatus();
if (brain.ex.state.get().status != "ready") {
    throw new Error("The expert's backend is not ready");
}
// run an inference query
const _prompt = "list the planets of the solar system";
await brain.think(_prompt, {
    temperature: 0.2,
    min_p: 0.05,
});

Server API example

To execute a task using the server HTTP API:

import { useServer } from "@agent-smith/apicli";

const api = useServer({
    apiKey: "server_api_key",
    onToken: (t) => {
        // handle the streamed tokens here
        process.stdout.write(t)
    }
});
// run the "translate" task on a prompt, with a language parameter
await api.executeTask(
    "translate", 
    "Which is the largest planet of the solar system?", 
    { lang: "german" }
);

Libraries

The cli is powered by:

  • Nanostores for state management and reactive variables
  • Locallm to manage the inference API servers
  • Modprompt to manage the prompt templates
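
The reactive state is visible in the examples above: brain.ex.state.get() reads a Nanostores store. As a generic illustration of that pattern (plain Nanostores usage, not Agent Smith internals):

import { atom } from "nanostores";

// a reactive variable holding a backend status
const status = atom("unknown");
// react to every change, starting with the current value
status.subscribe((s) => console.log("status:", s));
// read and update the value
console.log(status.get());
status.set("ready");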

The server is powered by:
