gitpavleenbali/PYAI

# 🧠 openstackai

**Three-Dimensional Intelligence Engine**

The Intelligence Engine for Software Factories.
Build, Orchestrate, and Scale AI-Native Applications.

What is openstackai • Three Dimensions • Why openstackai • Software Factories • Modules • Ecosystem


## 🎯 What is openstackai?

openstackai is not just another AI library. It's an Intelligence Engine.

While other frameworks help you call AI models, openstackai embeds intelligence into your software architecture. It's the foundation for building Software Factories: systems that don't just use AI, but think, adapt, and create.

> *"The best code is the code you never had to write. The best software is the software that writes itself."*

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart LR
    subgraph Traditional["Traditional AI Libraries"]
        A["Your Code"] -->|calls| B["AI API"]
        B -->|returns| A
    end

    subgraph openstackai["openstackai Intelligence Engine"]
        C["Application"] <-->|embedded| D["🧠 openstackai"]
        D <-->|orchestrates| E["Agents"]
        D <-->|manages| F["Memory"]
        D <-->|executes| G["Workflows"]
        D -->|connects| H["LLM Providers"]
    end
```

Built on a single core SDK, openstackai provides 25+ modules with 150+ classes covering every AI use case.


## 🔺 The Three Dimensions

openstackai operates across three dimensions of intelligence, each building upon the last:

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart TB
    subgraph D3["🏭 DIMENSION 3: CREATION"]
        direction LR
        C1["Self-generating<br/>Systems"]
        C2["Code Synthesis<br/>Engines"]
        C3["Autonomous<br/>Development"]
    end

    subgraph D2["🔗 DIMENSION 2: ORCHESTRATION"]
        direction LR
        O1["Agent<br/>Coordination"]
        O2["Workflow<br/>Automation"]
        O3["Knowledge<br/>Synthesis"]
    end

    subgraph D1["🧠 DIMENSION 1: COGNITION"]
        direction LR
        K1["ask • research"]
        K2["summarize • analyze"]
        K3["extract • generate"]
    end

    D1 -->|"builds"| D2
    D2 -->|"enables"| D3
```
| Dimension | Purpose | Key Components |
|---|---|---|
| 🧠 Cognition | Single AI operations | `ask()`, `research()`, `summarize()`, `extract()` |
| 🔗 Orchestration | Multi-agent coordination | `Agent`, `Workflow`, `Handoff`, Patterns |
| 🏭 Creation | Self-generating systems | `code.write()`, `code.review()`, Software Factories |

### Dimension 1️⃣ — Cognition

The foundation. Single-purpose AI operations that just work.

```python
from openstackai import ask, summarize, extract

# Instant intelligence
answer = ask("Explain quantum entanglement")
summary = summarize(long_document)
entities = extract(text, fields=["names", "dates", "amounts"])
```

### Dimension 2️⃣ — Orchestration

Coordinated intelligence. Multiple agents working in harmony.

```python
from openstackai import Agent, Runner
from openstackai.blueprint import Workflow, Step

# Create specialized agents
researcher = Agent(name="Researcher", instructions="Find information.")
analyst = Agent(name="Analyst", instructions="Analyze data deeply.")
writer = Agent(name="Writer", instructions="Write compelling content.")

# Build workflow
workflow = (Workflow("ResearchPipeline")
    .add_step(Step("research", researcher))
    .add_step(Step("analyze", analyst))
    .add_step(Step("write", writer))
    .build())
```

### Dimension 3️⃣ — Creation

Self-generating systems. The Software Factory.

```python
from openstackai import code

# Generate code from description
api_code = code.write("REST API for user management with JWT auth")

# Review and improve
review = code.review(existing_code)
improved = code.refactor(old_code, goal="async architecture")

# Generate tests
tests = code.test(my_function)
```

## ✨ Why openstackai: One-Stop Intelligence Solution

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart TB
    subgraph openstackai["🧠 openstackai - One-Stop Solution"]
        subgraph Cognition["Cognition"]
            ASK["ask"]
            RES["research"]
            SUM["summarize"]
            RAG["rag"]
            GEN["generate"]
        end

        subgraph Orchestration["Orchestration"]
            AGT["Agents"]
            WRK["Workflows"]
            HND["Handoffs"]
            PAT["Patterns"]
        end

        subgraph Enterprise["Enterprise"]
            AUTH["Azure AD"]
            SESS["Sessions"]
            EVAL["Evaluation"]
            TRACE["Tracing"]
        end

        subgraph Integrations["Integrations"]
            VEC["Vector DBs"]
            API["OpenAPI"]
            PLG["Plugins"]
            MCP["MCP/A2A"]
        end
    end
```

### The Problem with Current Frameworks

| Challenge | LangChain | CrewAI | openstackai Solution |
|---|---|---|---|
| Simple question | 10+ lines of setup | N/A | `ask("question")` |
| RAG system | 15+ lines, multiple classes | N/A | 2 lines |
| Agent with tools | Complex chains | YAML configs | 5 lines of Python |
| Multi-agent | 40+ lines | 50+ lines | 10 lines |
| Memory | External setup | Limited | Built-in |
| Production | DIY | DIY | Included |

### Lines of Code Comparison

| Task | LangChain | LlamaIndex | CrewAI | openstackai |
|---|---|---|---|---|
| Question Answering | 15 | 12 | N/A | 1 |
| RAG System | 25 | 20 | N/A | 2 |
| Agent with Tools | 30 | 25 | 30 | 5 |
| Multi-Agent Pipeline | 50 | 40 | 60 | 10 |
| Research Assistant | 45 | 35 | 50 | 1 |

๐Ÿญ Software Factories

A Software Factory is a system that generates software, not just code snippets.

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart LR
    subgraph Traditional["Traditional Development"]
        T1["📝 Write"] --> T2["🐛 Debug"] --> T3["📋 Test"] --> T4["📖 Document"]
    end

    subgraph Factory["Software Factory"]
        F1["💬 Describe"] --> F2["🏭 Generate"] --> F3["✅ Validate"] --> F4["🚀 Deploy"]
    end
```
| Aspect | Traditional | Software Factory |
|---|---|---|
| Input | Code | Natural Language |
| Process | Manual Writing | AI Generation |
| Testing | Manual | Auto-generated |
| Debugging | Line by line | Self-healing |
| Time | Hours/Days | Seconds/Minutes |
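
The describe → generate → validate cycle above can be sketched in plain Python. Everything below is a hypothetical stand-in: `generate` and `validate` simulate what `code.write()` and test execution might do, and are not the openstackai implementation.

```python
# Illustrative factory loop: describe -> generate -> validate, with retries.
# The stage functions are stand-ins, NOT openstackai internals.

def generate(description: str) -> str:
    # Stand-in for code.write(description)
    return f"def handler():\n    # implements: {description}\n    return 'ok'"

def validate(source: str) -> bool:
    # Stand-in for running generated tests; here we only compile-check.
    try:
        compile(source, "<generated>", "exec")
        return True
    except SyntaxError:
        return False

def factory(description: str, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        candidate = generate(description)
        if validate(candidate):
            return candidate
    raise RuntimeError("no valid candidate produced")

artifact = factory("REST API for user management")
```

The key design point is the retry loop: a factory treats generation as fallible and gates deployment on validation, rather than trusting a single model output.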

## 📚 Architecture Overview

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart TB
    subgraph Application["YOUR APPLICATION"]
        APP["🖥️ App Layer"]
    end

    subgraph SDK["openstackai SDK - src/openstackai/"]
        subgraph Easy["🚀 easy/"]
            E1["ask • research • summarize"]
            E2["rag • generate • translate"]
            E3["fetch • analyze • code"]
            E4["handoff • guardrails • trace"]
        end

        subgraph Core["🧠 core/"]
            C1["Agent"]
            C2["LLMProvider"]
            C3["Memory"]
        end

        subgraph Runner["⚡ runner/"]
            R1["Runner"]
            R2["StreamingRunner"]
        end

        subgraph Blueprint["🔗 blueprint/"]
            B1["Workflow"]
            B2["Patterns"]
        end

        subgraph Skills["🛠️ skills/"]
            S1["@tool decorator"]
            S2["SkillRegistry"]
        end

        subgraph Kernel["🔌 kernel/"]
            K1["Kernel"]
            K2["ServiceRegistry"]
        end
    end

    subgraph Providers["LLM PROVIDERS"]
        P1["Azure OpenAI"]
        P2["OpenAI"]
        P3["Anthropic"]
        P4["Ollama"]
    end

    Application --> SDK
    SDK --> Providers
```

## 📦 Complete Module Reference

### File Structure

```
src/openstackai/
├── easy/           # One-liner APIs (15+ functions)
├── core/           # Agent, Memory, LLM providers
├── runner/         # Execution engine
├── blueprint/      # Workflows and patterns
├── skills/         # Tools and skills system
├── kernel/         # Service registry (SK pattern)
├── sessions/       # SQLite/Redis persistence
├── evaluation/     # Agent testing framework
├── voice/          # Real-time voice
├── multimodal/     # Image, audio, video
├── vectordb/       # Vector database connectors
├── openapi/        # OpenAPI tool generation
├── plugins/        # Plugin architecture
├── a2a/            # Agent-to-Agent protocol
├── config/         # YAML configuration
├── tokens/         # Token counting
└── tools/          # Built-in tools
```

## 🎯 One-Liner APIs (easy/ module)

The easy/ module provides 15+ one-liner APIs that handle complex AI tasks with zero setup.

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart TB
    subgraph QA["Question Answering"]
        ASK["ask()"]
        RES["research()"]
    end

    subgraph Content["Content Processing"]
        SUM["summarize()"]
        TRANS["translate()"]
        EXT["extract()"]
        GEN["generate()"]
    end

    subgraph Knowledge["Knowledge Management"]
        RAGA["rag.index()"]
        RAGQ["rag.ask()"]
    end

    subgraph RealTime["Real-Time Data"]
        FW["fetch.weather()"]
        FN["fetch.news()"]
        FS["fetch.stock()"]
    end

    subgraph Code["Code Operations"]
        CW["code.write()"]
        CR["code.review()"]
        CD["code.debug()"]
        CT["code.test()"]
    end

    subgraph Analysis["Analysis"]
        AS["analyze.sentiment()"]
        AE["analyze.entities()"]
        AC["analyze.classify()"]
    end
```

### ask() — Universal Question Answering

The foundation of openstackai. Ask any question, get an intelligent answer.

```python
from openstackai import ask, ask_async

# Simple questions
answer = ask("What is Python?")

# Detailed responses
answer = ask("Explain quantum computing", detailed=True)

# Formatted output
answer = ask("List 5 programming tips", format="bullet")

# With context
answer = ask("What does this code do?", context=my_code)

# Async version
answer = await ask_async("What is AI?")
```

### research() — Deep Topic Research

Multi-step research with automatic source gathering and synthesis.

```python
from openstackai import research

# Basic research
result = research("AI trends in enterprise software")

# Access structured results
print(result.summary)        # Executive summary
print(result.key_points)     # Bullet points
print(result.insights)       # Deep analysis
print(result.sources)        # References

# Research with specific focus
result = research(
    topic="Machine learning in healthcare",
    depth="comprehensive",
    max_sources=10
)
```

### summarize() — Document Summarization

Summarize any content: text, files, URLs.

```python
from openstackai import summarize

# Text summarization
summary = summarize(long_document)

# File summarization (PDF, Word, etc.)
summary = summarize("./report.pdf")

# URL summarization
summary = summarize("https://example.com/article")

# Custom length
summary = summarize(text, length="short")    # ~2 sentences
summary = summarize(text, length="medium")   # ~1 paragraph
summary = summarize(text, length="long")     # Detailed
```

### rag — Retrieval-Augmented Generation

Production-ready RAG in 2 lines.

```python
from openstackai import rag

# Index documents
knowledge = rag.index("./documents")

# Query the knowledge base
answer = knowledge.ask("What is the main conclusion?")

# With source attribution
result = knowledge.ask("What were the key findings?", return_sources=True)
print(result.answer)
print(result.sources)

# Multiple document types
rag.index(["./pdfs", "./markdown", "./code"])
```

### generate() — Content Generation

Generate any type of content.

```python
from openstackai import generate

# Code generation
code = generate("fibonacci function", type="code")
api = generate("REST API for user management", type="code", language="python")

# Email generation
email = generate("polite rejection email", type="email")

# Article generation
article = generate("Introduction to AI", type="article", length="1000 words")

# Custom types
plan = generate("project plan for mobile app", type="plan")
```

### translate() — Language Translation

```python
from openstackai import translate

# Simple translation
spanish = translate("Hello, how are you?", to="spanish")
japanese = translate("Good morning", to="japanese")

# Detect and translate
result = translate(unknown_text, to="english")
print(result.detected_language)  # "french"
print(result.translated)         # English text

# Preserve formatting
translated_doc = translate(markdown_text, to="german", preserve_format=True)
```

### extract() — Structured Data Extraction

Extract structured data from unstructured text.

```python
from openstackai import extract

# Extract specific fields
data = extract(email_text, fields=["sender", "date", "subject", "action_items"])

# With types
data = extract(invoice, fields={
    "vendor": "string",
    "amount": "float",
    "date": "date",
    "line_items": "list"
})

# Entity extraction
entities = extract(article, fields=["people", "organizations", "locations"])
```

### fetch — Real-Time Data

Access live data feeds.

```python
from openstackai import fetch

# Weather data
weather = fetch.weather("New York")
print(weather.temperature)
print(weather.conditions)

# News
headlines = fetch.news("artificial intelligence")
for article in headlines:
    print(article.title, article.source)

# Stock data
stock = fetch.stock("AAPL")
print(stock.price, stock.change)

# Web content
content = fetch.url("https://example.com")
```

### analyze — Data Analysis

```python
from openstackai import analyze

# Sentiment analysis
result = analyze.sentiment("I love this product!")
print(result.label)     # "positive"
print(result.score)     # 0.95

# Entity recognition
entities = analyze.entities("Apple CEO Tim Cook announced...")
# [{"text": "Apple", "type": "ORG"}, {"text": "Tim Cook", "type": "PERSON"}]

# Classification
category = analyze.classify(text, categories=["tech", "sports", "politics"])

# Comparison
comparison = analyze.compare(text1, text2)
print(comparison.similarity)
print(comparison.differences)
```

### code — Code Operations

AI-powered code assistant.

```python
from openstackai import code

# Write code
implementation = code.write("binary search tree in Python")
api = code.write("FastAPI CRUD endpoints for users", framework="fastapi")

# Review code
review = code.review(my_code)
print(review.issues)
print(review.suggestions)
print(review.score)

# Debug errors
fix = code.debug("TypeError: 'NoneType' object is not subscriptable", context=my_code)
print(fix.explanation)
print(fix.solution)

# Generate tests
tests = code.test(my_function)
print(tests.test_cases)

# Refactor
improved = code.refactor(legacy_code, goal="async/await pattern")

# Explain code
explanation = code.explain(complex_function)
```

### handoff() — Agent Delegation

Transfer tasks between agents.

```python
from openstackai import handoff

# Transfer to specialist
result = handoff(
    task="Complex legal analysis",
    to_agent=legal_specialist,
    context=case_details
)

# With routing
result = handoff(
    task=user_request,
    routes={
        "code": coder_agent,
        "math": calculator_agent,
        "writing": writer_agent
    }
)
```

### guardrails() — Safety Wrappers

```python
from openstackai.easy import guardrails

# Wrap any function with safety
safe_ask = guardrails.wrap(ask, block_pii=True, block_harmful=True)

# Custom validators
safe_generate = guardrails.wrap(generate,
    validators=[no_code_execution, family_friendly])

# Rate limiting
limited_ask = guardrails.wrap(ask, rate_limit="10/minute")
```

### trace() — Debugging & Observability

```python
from openstackai import ask, research
from openstackai.easy import trace

# Enable tracing
trace.enable()

# Run your code
result = ask("What is AI?")
research_result = research("Machine learning")

# View traces
trace.show()
# Displays: tokens used, latency, model calls, cost

# Export for analysis
trace.export("traces.json")
```

## 🤖 Agent Framework (core/ module)

The core/ module provides the foundational building blocks for intelligent agents.

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
classDiagram
    class Agent {
        +name: str
        +instructions: str
        +tools: List~Tool~
        +memory: Memory
        +model: str
        +run(input) RunResult
    }

    class AgentConfig {
        +model: str
        +temperature: float
        +max_tokens: int
        +tools: List
    }

    class Memory {
        +add(message)
        +get_context()
        +clear()
    }

    class LLMProvider {
        +generate(prompt) Response
        +stream(prompt) AsyncIterator
    }

    Agent --> AgentConfig
    Agent --> Memory
    Agent --> LLMProvider
```

### Agent Execution Flow

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
sequenceDiagram
    participant User
    participant Runner
    participant Agent
    participant Memory
    participant LLM
    participant Tools

    User->>Runner: run_sync(agent, "Query")
    Runner->>Agent: execute(input)
    Agent->>Memory: get_context()
    Memory-->>Agent: conversation_history
    Agent->>LLM: generate(prompt + context)
    LLM-->>Agent: response + tool_calls

    alt Has Tool Calls
        loop For each tool call
            Agent->>Tools: execute(tool_call)
            Tools-->>Agent: result
        end
        Agent->>LLM: generate(with tool results)
        LLM-->>Agent: final response
    end

    Agent->>Memory: add(input, response)
    Agent-->>Runner: RunResult
    Runner-->>User: result.final_output
```
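
The sequence above amounts to a small loop: call the model, execute any requested tools, then call the model again with the results. A hypothetical, self-contained sketch of that loop follows; the `llm` stub and `TOOLS` table are stand-ins for illustration, not Runner internals.

```python
# Illustrative tool-call loop (stand-ins only, not openstackai code).

def llm(prompt, tool_results=None):
    # Stand-in model: requests one tool call, then answers with the result.
    if tool_results is None:
        return {"tool_calls": [("get_weather", {"city": "Tokyo"})]}
    return {"content": f"Weather report: {tool_results[0]}"}

TOOLS = {"get_weather": lambda city: f"{city}: sunny"}

def run(user_input, memory):
    context = list(memory)                       # Memory.get_context()
    response = llm(context + [user_input])
    if response.get("tool_calls"):               # "Has Tool Calls" branch
        results = [TOOLS[name](**args) for name, args in response["tool_calls"]]
        response = llm(context + [user_input], tool_results=results)
    memory.append((user_input, response["content"]))  # Memory.add(...)
    return response["content"]

memory = []
print(run("What's the weather in Tokyo?", memory))
```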

### Creating Agents

```python
from openstackai import Agent, Runner
from openstackai.skills import tool

# Define custom tools
@tool(description="Get current weather for a city")
async def get_weather(city: str) -> str:
    """Fetch weather data for the specified city."""
    return f"Weather in {city}: Sunny, 72°F"

@tool(description="Search the knowledge base")
async def search_kb(query: str) -> str:
    """Search internal knowledge base."""
    return f"Found 3 results for '{query}'"

# Create the agent
agent = Agent(
    name="WeatherBot",
    instructions="""You are a helpful weather assistant.
    Always provide accurate weather information.
    If asked about other topics, politely redirect to weather.""",
    tools=[get_weather, search_kb],
    model="gpt-4o-mini"
)

# Run synchronously
result = Runner.run_sync(agent, "What's the weather in Tokyo?")
print(result.final_output)

# Run asynchronously
result = await Runner.run(agent, "Weather in Paris?")
print(result.final_output)
```

### Agent Configuration

```python
from openstackai import Agent
from openstackai.core import AgentConfig

# Detailed configuration
config = AgentConfig(
    model="gpt-4o",
    temperature=0.7,
    max_tokens=1000,
    top_p=0.9,
    presence_penalty=0.1,
    frequency_penalty=0.1
)

agent = Agent(
    name="Analyst",
    instructions="Analyze data thoroughly.",
    config=config
)
```

### Memory Management

```python
from openstackai import Agent
from openstackai.core import ConversationMemory, SlidingWindowMemory

# Conversation memory (keeps all messages)
agent = Agent(
    name="Assistant",
    instructions="Help users.",
    memory=ConversationMemory()
)

# Sliding window (keeps last N messages)
agent = Agent(
    name="Assistant",
    instructions="Help users.",
    memory=SlidingWindowMemory(window_size=10)
)

# Access memory
agent.memory.add("user", "Hello")
agent.memory.add("assistant", "Hi there!")
context = agent.memory.get_context()
```
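
Conceptually, a sliding window is just a bounded buffer: once `window_size` messages are stored, the oldest entry is dropped. The sketch below is an illustrative stand-in built on `collections.deque`, not the actual `SlidingWindowMemory` implementation.

```python
from collections import deque

class WindowMemory:
    """Illustrative stand-in for a sliding-window memory (not the real class)."""

    def __init__(self, window_size: int = 10):
        # deque with maxlen silently discards the oldest entry when full
        self._messages = deque(maxlen=window_size)

    def add(self, role: str, content: str) -> None:
        self._messages.append({"role": role, "content": content})

    def get_context(self):
        return list(self._messages)

mem = WindowMemory(window_size=2)
mem.add("user", "Hello")
mem.add("assistant", "Hi there!")
mem.add("user", "How are you?")
print(len(mem.get_context()))  # only the 2 most recent messages remain
```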

### Streaming Responses

```python
from openstackai import Agent, Runner

agent = Agent(name="Assistant", instructions="Be helpful.")

# Stream tokens as they arrive
async for chunk in Runner.stream(agent, "Tell me a story"):
    print(chunk, end="", flush=True)
```

## 🔗 Multi-Agent Systems (blueprint/ module)

The blueprint/ module enables sophisticated multi-agent orchestration.

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart TB
    subgraph Patterns["Available Patterns"]
        direction TB
        P1["🔗 Chain<br/>Sequential Processing"]
        P2["🔀 Router<br/>Dynamic Routing"]
        P3["📊 MapReduce<br/>Parallel Processing"]
        P4["👔 Supervisor<br/>Managed Workers"]
        P5["🔄 Loop<br/>Iterative Refinement"]
    end
```

### Architecture Patterns

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart LR
    subgraph Chain["Chain Pattern"]
        CA1["📝 Draft"] --> CA2["✍️ Edit"] --> CA3["✅ Review"]
    end

    subgraph Router["Router Pattern"]
        RR["🔀 Router"] --> RA1["💻 Code"]
        RR --> RA2["📐 Math"]
        RR --> RA3["📝 Writing"]
    end
```
```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart TB
    subgraph MapReduce["MapReduce Pattern"]
        MR1["📄 Doc 1"] --> MAP1["Analyzer"]
        MR2["📄 Doc 2"] --> MAP2["Analyzer"]
        MR3["📄 Doc 3"] --> MAP3["Analyzer"]
        MAP1 --> RED["Synthesizer"]
        MAP2 --> RED
        MAP3 --> RED
    end

    subgraph Supervisor["Supervisor Pattern"]
        SUP["👔 Manager"] --> SW1["Worker 1"]
        SUP --> SW2["Worker 2"]
        SUP --> SW3["Worker 3"]
        SW1 -.-> SUP
        SW2 -.-> SUP
        SW3 -.-> SUP
    end
```

### Workflow Definition

```python
from openstackai import Agent
from openstackai.blueprint import Workflow, Step

# Create specialized agents
researcher = Agent(
    name="Researcher",
    instructions="Research topics thoroughly. Return structured findings."
)

writer = Agent(
    name="Writer",
    instructions="Write engaging content based on research."
)

editor = Agent(
    name="Editor",
    instructions="Edit and polish content for clarity."
)

# Build sequential workflow
workflow = (Workflow("ContentPipeline")
    .add_step(Step("research", researcher, output_key="research"))
    .add_step(Step("write", writer, input_key="research", output_key="draft"))
    .add_step(Step("edit", editor, input_key="draft", output_key="final"))
    .build())

# Execute
result = await workflow.run("Write about AI in healthcare")
print(result.outputs["final"])
```

### Chain Pattern

```python
from openstackai.blueprint import ChainPattern

# Create a chain of agents
chain = ChainPattern([
    ("draft", drafter),
    ("review", reviewer),
    ("polish", editor)
])

# Output of each agent feeds into the next
result = await chain.run("Create a product announcement")
```

### Router Pattern

```python
from openstackai.blueprint import RouterPattern

# Create router with specialized agents
router = RouterPattern()
router.add_route("code", code_agent, keywords=["python", "javascript", "bug"])
router.add_route("math", math_agent, keywords=["calculate", "equation", "number"])
router.add_route("writing", writer_agent, keywords=["write", "essay", "email"])
router.add_route("default", general_agent)

# Router automatically selects the right agent
result = await router.run("Fix this Python bug: ...")
# -> Routes to code_agent

result = await router.run("Calculate 234 * 567")
# -> Routes to math_agent
```
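
Keyword routing of this kind reduces to a first-match scan over registered keyword lists, with a fallback handler when nothing matches. A minimal stand-in sketch (not the real `RouterPattern`, and with plain callables in place of agents):

```python
# Illustrative keyword router; handlers are plain callables, not agents.

class KeywordRouter:
    def __init__(self):
        self._routes = []        # (name, handler, lowercase keywords)
        self._default = None

    def add_route(self, name, handler, keywords=None):
        if keywords is None:
            self._default = handler          # fallback route
        else:
            self._routes.append((name, handler, [k.lower() for k in keywords]))

    def run(self, task: str) -> str:
        text = task.lower()
        for name, handler, keywords in self._routes:
            if any(k in text for k in keywords):
                return handler(task)         # first matching route wins
        return self._default(task)

router = KeywordRouter()
router.add_route("code", lambda t: "code_agent", keywords=["python", "bug"])
router.add_route("math", lambda t: "math_agent", keywords=["calculate"])
router.add_route("default", lambda t: "general_agent")

print(router.run("Fix this Python bug"))   # code_agent
print(router.run("Calculate 234 * 567"))   # math_agent
```

Note the design consequence: route registration order matters, since the first keyword hit decides the handler.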

### MapReduce Pattern

```python
from openstackai.blueprint import MapReducePattern

# Analyze multiple documents in parallel
analyzer = Agent(name="Analyzer", instructions="Analyze document content.")
synthesizer = Agent(name="Synthesizer", instructions="Synthesize findings.")

map_reduce = MapReducePattern(
    mapper=analyzer,
    reducer=synthesizer
)

documents = ["doc1.txt", "doc2.txt", "doc3.txt"]
result = await map_reduce.run(documents)
# Analyzes all docs in parallel, then synthesizes
```
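
The map phase fans out concurrently and the reduce phase combines the results; in `asyncio` terms that is essentially a `gather` followed by a single reducer call. An illustrative sketch with stand-in mapper and reducer coroutines (not the `MapReducePattern` implementation):

```python
import asyncio

async def analyze(doc: str) -> str:
    # Stand-in "mapper" agent
    return f"summary of {doc}"

async def synthesize(parts: list) -> str:
    # Stand-in "reducer" agent
    return " | ".join(parts)

async def map_reduce(docs: list) -> str:
    # Map phase runs concurrently; reduce combines all mapped outputs.
    mapped = await asyncio.gather(*(analyze(d) for d in docs))
    return await synthesize(list(mapped))

result = asyncio.run(map_reduce(["doc1.txt", "doc2.txt", "doc3.txt"]))
print(result)
```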

### Supervisor Pattern

```python
from openstackai.blueprint import SupervisorPattern

# Manager delegates to workers
manager = Agent(
    name="Manager",
    instructions="Delegate tasks and synthesize results."
)

workers = [
    Agent(name="Coder", instructions="Write code."),
    Agent(name="Tester", instructions="Write tests."),
    Agent(name="Documenter", instructions="Write docs.")
]

supervisor = SupervisorPattern(manager=manager, workers=workers)
result = await supervisor.run("Build a calculator module")
```

---

## 🔌 Kernel Registry (`kernel/` module)

Microsoft Semantic Kernel-style service management:

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart TB
    subgraph Kernel["Kernel"]
        SR["ServiceRegistry"]
        FR["FilterRegistry"]
        PR["PluginRegistry"]
        
        SR --> LLM1["GPT-4"]
        SR --> LLM2["Claude"]
        SR --> MEM["Redis Memory"]
        
        PR --> P1["WeatherPlugin"]
        PR --> P2["SearchPlugin"]
        
        FR --> F1["LoggingFilter"]
        FR --> F2["ValidationFilter"]
    end
```

```python
from openstackai.kernel import Kernel, KernelBuilder

kernel = (KernelBuilder()
    .add_llm(openai_client, name="gpt4", is_default=True)
    .add_llm(azure_client, name="azure")
    .add_memory(redis_memory)
    .add_plugin(WeatherPlugin())
    .build())

result = await kernel.invoke("weather", "get_weather", city="NYC")
```

๐Ÿข Enterprise Features

openstackai is built for production. Every feature you need to deploy AI at scale.

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart TB
    subgraph Enterprise["Enterprise Features"]
        direction TB
        AUTH["🔐 Azure AD<br/>Authentication"]
        SESS["💾 Session<br/>Management"]
        EVAL["📊 Testing &<br/>Evaluation"]
        TRACE["📝 Tracing &<br/>Observability"]
        GUARD["🛡️ Guardrails &<br/>Safety"]
        MONITOR["📈 Monitoring &<br/>Analytics"]
    end
```

๐Ÿ” Azure AD Authentication

Seamless integration with Azure Active Directory. No API keys needed in production.

```mermaid
%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
sequenceDiagram
    participant App
    participant openstackai
    participant AzureAD as Azure AD
    participant AOAI as Azure OpenAI

    App->>openstackai: ask("question")
    openstackai->>AzureAD: Get token (DefaultAzureCredential)
    AzureAD-->>openstackai: Bearer token
    openstackai->>AOAI: API call with token
    AOAI-->>openstackai: Response
    openstackai-->>App: Answer
```

```python
import os

# Configure Azure OpenAI (no API key needed!)
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://your-resource.openai.azure.com/"
os.environ["AZURE_OPENAI_DEPLOYMENT"] = "gpt-4o-mini"

from openstackai import ask

# Uses your az login credentials or Managed Identity automatically
answer = ask("Hello from Azure!")
```

Supported Authentication Methods:

  • az login (Developer workstations)
  • Managed Identity (Azure VMs, App Service, AKS)
  • Service Principal (CI/CD pipelines)
  • Workload Identity (Kubernetes)
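
Whichever method resolves, the flow in the diagram reduces to: obtain a bearer token for the Azure OpenAI scope, then attach it as an `Authorization` header on each call. A minimal sketch of that exchange, using a stand-in credential class (a real deployment would use `DefaultAzureCredential` from the `azure-identity` package; `FakeCredential` here is purely illustrative):

```python
import time

class FakeCredential:
    """Stand-in for azure.identity.DefaultAzureCredential (illustration only)."""
    def get_token(self, scope):
        # A real credential chain would try az login, Managed Identity,
        # Service Principal, etc., and return a short-lived token.
        return {"token": "fake-bearer-token", "expires_on": time.time() + 3600}

def build_auth_headers(credential, scope="https://cognitiveservices.azure.com/.default"):
    """Exchange a credential for the Authorization header an API call carries."""
    token = credential.get_token(scope)
    return {"Authorization": f"Bearer {token['token']}"}

headers = build_auth_headers(FakeCredential())
print(headers["Authorization"])  # Bearer fake-bearer-token
```

The point of the token-based flow is that no long-lived secret ever appears in code or environment variables; tokens expire and are refreshed by the credential chain.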

💾 Session Management

Persistent conversation history with SQLite or Redis backends.

%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart LR
    subgraph App["Application"]
        U1["User A"]
        U2["User B"]
        U3["User C"]
    end
    
    subgraph Sessions["SessionManager"]
        SM["Session<br/>Manager"]
    end
    
    subgraph Storage["Storage Backend"]
        SQL["SQLite<br/>sessions.db"]
        RED["Redis<br/>Cluster"]
    end
    
    U1 --> SM
    U2 --> SM
    U3 --> SM
    SM --> SQL
    SM --> RED
from openstackai.sessions import SessionManager, SQLiteSessionStore, RedisSessionStore

# SQLite for development
manager = SessionManager(store=SQLiteSessionStore("sessions.db"))

# Redis for production
manager = SessionManager(store=RedisSessionStore(
    host="redis.example.com",
    port=6379,
    password="secret"
))

# Create and use sessions
session = await manager.create(user_id="user123")
session.add_message("user", "Hello")
session.add_message("assistant", "Hi there!")

# Resume later
session = await manager.get(session_id="abc123")
history = session.get_messages()

# Session with agent
from openstackai import Agent, Runner

agent = Agent(name="Assistant", instructions="Be helpful.")
result = await Runner.run(agent, "Hello", session=session)
# Automatically maintains conversation history
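
Under any backend, a session is essentially an append-only message log keyed by id, and the manager is a lookup table over those logs. A minimal in-memory sketch of that contract (the class names here are illustrative, not the library's actual implementation, and the real API is async):

```python
import uuid

class InMemorySession:
    """Illustrative session: an append-only message log with a unique id."""
    def __init__(self, user_id):
        self.id = uuid.uuid4().hex
        self.user_id = user_id
        self._messages = []

    def add_message(self, role, content):
        self._messages.append({"role": role, "content": content})

    def get_messages(self):
        return list(self._messages)  # copy, so callers cannot mutate history

class InMemorySessionManager:
    """Illustrative manager: create sessions and resume them by id."""
    def __init__(self):
        self._sessions = {}

    def create(self, user_id):
        session = InMemorySession(user_id)
        self._sessions[session.id] = session
        return session

    def get(self, session_id):
        return self._sessions[session_id]

manager = InMemorySessionManager()
session = manager.create(user_id="user123")
session.add_message("user", "Hello")
session.add_message("assistant", "Hi there!")
resumed = manager.get(session.id)
print(len(resumed.get_messages()))  # 2
```

Swapping the dict for SQLite or Redis changes durability and scale, but not this interface.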

📊 Evaluation Framework

Test your agents systematically.

from openstackai.evaluation import Evaluator, EvalSet, TestCase, metrics

# Define test cases
eval_set = EvalSet([
    TestCase(
        input="What is 2+2?",
        expected="4",
        tags=["math"]
    ),
    TestCase(
        input="Capital of France?",
        expected="Paris",
        tags=["geography"]
    ),
    TestCase(
        input="Write a haiku about coding",
        expected_pattern=r".*\n.*\n.*",  # 3 lines
        tags=["creative"]
    )
])

# Run evaluation
evaluator = Evaluator(agent)
results = await evaluator.run(eval_set)

# View results
print(f"Pass rate: {results.pass_rate}%")
print(f"Average latency: {results.avg_latency}ms")

for result in results.failed:
    print(f"Failed: {result.input}")
    print(f"Expected: {result.expected}")
    print(f"Got: {result.actual}")
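
The pass/fail logic behind such a framework can be approximated in a few lines: exact match when a case has `expected`, regex match when it has `expected_pattern`, and a pass rate over all cases. A hedged sketch of that logic (not the library's internals):

```python
import re

def evaluate(agent_fn, cases):
    """Run agent_fn over test cases; return (pass_rate, failures).

    Each case is a dict with 'input' plus either 'expected' (exact match)
    or 'expected_pattern' (regex search). Illustrative only.
    """
    failures = []
    for case in cases:
        actual = agent_fn(case["input"])
        if "expected" in case:
            ok = actual.strip() == case["expected"]
        else:
            ok = re.search(case["expected_pattern"], actual) is not None
        if not ok:
            failures.append({**case, "actual": actual})
    pass_rate = 100 * (len(cases) - len(failures)) / len(cases)
    return pass_rate, failures

# A toy "agent" that only knows one arithmetic fact
def toy_agent(prompt):
    return "4" if prompt == "What is 2+2?" else "I don't know"

rate, failed = evaluate(toy_agent, [
    {"input": "What is 2+2?", "expected": "4"},
    {"input": "Capital of France?", "expected": "Paris"},
])
print(rate)  # 50.0
```

A production evaluator would add latency tracking, tags, and LLM-graded metrics on top of this core loop.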

๐Ÿ“ Tracing & Observability

Full visibility into agent operations.

from openstackai import ask, research
from openstackai.easy import trace

# Enable tracing
trace.enable()

# Run operations
result = ask("Explain quantum computing")
research_result = research("AI in healthcare")

# View traces
trace.show()
# Output:
# ┌─ ask("Explain quantum computing")
# │  Model: gpt-4o-mini
# │  Tokens: 45 in, 230 out
# │  Latency: 1.2s
# │  Cost: $0.0012
# └─

# Export for external tools
trace.export("traces.json")
trace.export_to_opentelemetry()
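
The mechanism behind this kind of tracing is typically a decorator that wraps each traced call, timestamps it, and appends a record to a shared buffer. A small self-contained sketch of that pattern (names like `traced` and `TRACES` are illustrative, not the library's API):

```python
import functools
import time

TRACES = []  # shared buffer of trace records

def traced(fn):
    """Record the name, arguments, and latency of each call (sketch only)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        TRACES.append({
            "name": fn.__name__,
            "args": args,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@traced
def fake_ask(question):
    return f"Answer to: {question}"

fake_ask("Explain quantum computing")
print(TRACES[0]["name"])  # fake_ask
```

Exporting to OpenTelemetry then amounts to mapping each record in the buffer onto a span.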

๐Ÿ›ก๏ธ Guardrails & Safety

Built-in protection for production deployments.

from openstackai import ask, generate
from openstackai.easy import guardrails

# PII protection
safe_ask = guardrails.wrap(ask, block_pii=True)
# Blocks: SSNs, credit cards, phone numbers

# Content filtering
safe_generate = guardrails.wrap(generate, 
    block_harmful=True,
    block_adult=True
)

# Custom validators
def no_financial_advice(response):
    if "invest" in response.lower():
        return False, "Cannot provide investment advice"
    return True, None

safe_ask = guardrails.wrap(ask, validators=[no_financial_advice])

# Rate limiting
limited_ask = guardrails.wrap(ask, rate_limit="100/hour")

# Token limits
bounded_ask = guardrails.wrap(ask, max_tokens=500)
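
At its simplest, a PII guard is a wrapper that screens input against patterns before the model is ever called. A deliberately simplified sketch (real PII detection covers far more than this one SSN regex, and `wrap_with_pii_guard` is an illustrative name, not the library's API):

```python
import re

# US SSN shape only — a real guardrail matches many more PII patterns
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def wrap_with_pii_guard(fn):
    """Refuse prompts containing an SSN-shaped string (simplified)."""
    def guarded(prompt):
        if SSN_PATTERN.search(prompt):
            raise ValueError("Input blocked: possible PII detected")
        return fn(prompt)
    return guarded

safe = wrap_with_pii_guard(lambda p: f"echo: {p}")
print(safe("hello"))  # echo: hello
try:
    safe("my SSN is 123-45-6789")
except ValueError as e:
    print(e)  # Input blocked: possible PII detected
```

Validators, rate limits, and token caps compose the same way: each is a wrapper that either passes the call through or rejects it.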

🔗 Integrations

%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart TB
    openstackai["🧠 openstackai"] --> VEC["Vector DBs"]
    openstackai --> FRAME["Frameworks"]
    openstackai --> PROTO["Protocols"]
    
    VEC --> CH["ChromaDB"]
    VEC --> PC["Pinecone"]
    VEC --> QD["Qdrant"]
    VEC --> AZ["Azure AI Search"]
    
    FRAME --> LC["LangChain"]
    FRAME --> SK["Semantic Kernel"]
    
    PROTO --> MCP["MCP Protocol"]
    PROTO --> A2A["A2A Protocol"]
    PROTO --> OAPI["OpenAPI"]

📊 Feature Comparison

| Feature | openstackai | OpenAI Agents | Google ADK | Semantic Kernel | LangChain |
|---|---|---|---|---|---|
| One-liner APIs | ✅ | ❌ | ❌ | ❌ | ❌ |
| Multi-provider LLM | ✅ | ❌ | ✅ | ✅ | ✅ |
| Azure AD Auth | ✅ | ❌ | ❌ | ✅ | ❌ |
| Session Management | ✅ | ✅ | ✅ | ❌ | ✅ |
| Evaluation Framework | ✅ | ❌ | ✅ | ❌ | ❌ |
| Voice Streaming | ✅ | ✅ | ❌ | ❌ | ❌ |
| MCP Protocol | ✅ | ❌ | ❌ | ❌ | ❌ |
| A2A Protocol | ✅ | ❌ | ✅ | ❌ | ❌ |
| Guardrails | ✅ | ✅ | ❌ | ❌ | ✅ |
| Workflow Patterns | ✅ | ❌ | ❌ | ✅ | ✅ |
| Plugin System | ✅ | ❌ | ❌ | ✅ | ❌ |
| YAML Config | ✅ | ❌ | ✅ | ❌ | ❌ |

🚀 Get Started

Installation

pip install openstackai                # Basic
pip install openstackai[openai]        # OpenAI
pip install openstackai[azure]         # Azure + Azure AD
pip install openstackai[all]           # Everything

Hello World

from openstackai import ask

answer = ask("What is the capital of France?")
print(answer)  # Paris

Configuration

# OpenAI
export OPENAI_API_KEY=sk-your-key

# Azure OpenAI (Azure AD - no key needed!)
export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
export AZURE_OPENAI_DEPLOYMENT=gpt-4o-mini
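
One plausible way such configuration resolves at runtime: prefer Azure when its endpoint variable is set, otherwise fall back to an OpenAI key. This is a sketch of the selection logic only, under that assumed precedence, not the library's actual behavior:

```python
def resolve_provider(env):
    """Pick a provider config from environment variables (illustrative precedence)."""
    if env.get("AZURE_OPENAI_ENDPOINT"):
        return {
            "provider": "azure",
            "endpoint": env["AZURE_OPENAI_ENDPOINT"],
            "deployment": env.get("AZURE_OPENAI_DEPLOYMENT", "gpt-4o-mini"),
        }
    if env.get("OPENAI_API_KEY"):
        return {"provider": "openai"}
    raise RuntimeError("No provider configured: set AZURE_OPENAI_ENDPOINT or OPENAI_API_KEY")

cfg = resolve_provider({"AZURE_OPENAI_ENDPOINT": "https://your-resource.openai.azure.com/"})
print(cfg["provider"])  # azure
```

Passing a plain dict instead of reading `os.environ` directly keeps the logic easy to test.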

💡 Design Philosophy

%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart LR
    subgraph Philosophy["Design Principles"]
        P1["🎯 Simplicity First"]
        P2["🔋 Batteries Included"]
        P3["📐 Progressive Complexity"]
        P4["🧠 Intelligence as Infrastructure"]
        P5["🔧 Composability"]
    end
| Principle | Description |
|---|---|
| Simplicity First | One line should accomplish one task |
| Batteries Included | Everything you need, out of the box |
| Progressive Complexity | Start simple, scale up when needed |
| Intelligence as Infrastructure | AI is the foundation, not a feature |
| Composability | Small pieces combine into powerful systems |

👥 Community & Documentation


🔮 The openstackai Product Suite

%%{init: {'theme': 'base', 'themeVariables': { 'primaryColor': 'transparent', 'primaryTextColor': '#ffffff', 'primaryBorderColor': '#ffffff', 'lineColor': '#ffffff', 'secondaryColor': 'transparent', 'tertiaryColor': 'transparent', 'background': 'transparent', 'mainBkg': 'transparent', 'nodeBorder': '#ffffff', 'clusterBkg': 'transparent', 'clusterBorder': '#ffffff', 'titleColor': '#ffffff', 'edgeLabelBackground': 'transparent', 'nodeTextColor': '#ffffff'}}}%%
flowchart LR
    subgraph Available["✅ Available Now"]
        PA["🤖 openstackai<br/>Core SDK"]
    end
    
    subgraph Soon["🔜 Coming Soon"]
        PF["🔄 PyFlow<br/>Visual Workflows"]
        PV["👁️ PyVision<br/>Computer Vision"]
        PVO["🎤 PyVoice<br/>Speech & Audio"]
    end
    
    subgraph Future["🔮 Future"]
        PFAC["🏭 PyFactory<br/>Software Generation"]
        PM["🧠 PyMind<br/>Autonomous Reasoning"]
    end
    end
    
    Available --> Soon --> Future
| Product | Purpose | Dimension | Status |
|---|---|---|---|
| 🤖 openstackai | Core Intelligence SDK | All | ✅ Available |
| 🔄 PyFlow | Visual AI Workflows | Orchestration | 🔜 Coming Soon |
| 👁️ PyVision | Computer Vision | Cognition | 🔜 Coming Soon |
| 🎤 PyVoice | Speech & Audio | Cognition | 🔜 Coming Soon |
| 🏭 PyFactory | Software Generation | Creation | 🔮 Future |
| 🧠 PyMind | Autonomous Reasoning | Creation | 🔮 Future |

📜 License

MIT License — Build freely, build boldly.


🧠 openstackai
Intelligence, Embedded.

25+ Modules • 150+ Classes • 671 Tests • Infinite Possibilities

Built with 🧠 by the openstackai team
