NevoleMarek/agentic-graph-exploration

agentic-graph-exploration

This repository helps you visualize how agents traverse document structure via tools. An LLM agent explores a graph of linked Markdown documents using two tools—semantic search and reading nodes by id—while a web viewer shows the graph and the agent’s trajectory (which nodes it visited and its reasoning) in real time.
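The two tools map naturally onto two plain functions over an in-memory graph. A minimal sketch, assuming an illustrative node layout (the names semantic_retrieve, read_nodes, and linked_node_ids come from the examples below; the toy word-overlap scoring stands in for real embedding search):

```python
# Minimal sketch of the agent's two tools over an in-memory graph.
# Node contents and scoring are illustrative; real retrieval would use embeddings.
GRAPH = {
    "index.md": {"content": "welcome page linking to the api", "linked_node_ids": ["api.md"]},
    "api.md": {"content": "api and cli tool reference", "linked_node_ids": ["graph.md", "index.md"]},
    "graph.md": {"content": "how the document graph is built", "linked_node_ids": []},
}

def semantic_retrieve(query: str, k: int = 2) -> list[str]:
    """Return ids of the k nodes most similar to the query (toy word overlap)."""
    words = set(query.lower().split())
    return sorted(
        GRAPH,
        key=lambda nid: len(words & set(GRAPH[nid]["content"].split())),
        reverse=True,
    )[:k]

def read_nodes(node_ids: list[str]) -> list[dict]:
    """Return full content plus outgoing links for each requested node id."""
    return [{"id": nid, **GRAPH[nid]} for nid in node_ids if nid in GRAPH]
```

The agent alternates between these two calls: search to jump somewhere relevant, then read to get content and discover links worth following.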

Requirements

Setup

  1. Install dependencies:
    uv sync

Azure OpenAI (required for run, optional for build)

The run command uses Azure OpenAI. The build --summarize option also uses it to generate node summaries. Configure via a .env file (copy from .env.example):

  • AZURE_OPENAI_ENDPOINT – e.g. https://your-resource.openai.azure.com/
  • AZURE_OPENAI_API_KEY – your Azure OpenAI API key
  • AZURE_OPENAI_DEPLOYMENT – deployment name (e.g. gpt-5-mini)
  • OPENAI_API_VERSION – optional; Azure may use a default
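A .env sketch with placeholder values (copy .env.example and substitute your own; the endpoint and deployment name here are illustrative):

```
# Illustrative .env values; replace with your own resource details
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_API_KEY=<your-api-key>
AZURE_OPENAI_DEPLOYMENT=gpt-5-mini
# Optional; omit to let Azure use its default
# OPENAI_API_VERSION=<api-version>
```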

You can override the deployment with --model:

uv run age run --model gpt-5-mini

Sample docs

The sample_docs/ folder contains linked Markdown files. To use them:

  1. Create a graph from the sample docs:

    uv run age build sample_docs --output graph.json

    Optionally add LLM-generated node summaries (requires Azure OpenAI):

    uv run age build sample_docs --output graph.json --summarize --model gpt-5-mini

  2. Run the application with that graph:

    uv run age run --graph graph.json

Alternatively, build and run in one go (graph is built on startup):

uv run age --docs sample_docs
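The build step writes the graph to JSON. The exact schema of graph.json isn't shown in this README; a plausible node record, assuming only the fields mentioned here (an id, content, an optional summary from --summarize, and linked_node_ids), might round-trip like this:

```python
import json

# Hypothetical graph.json shape; the real schema may differ. Only
# linked_node_ids and optional summaries are mentioned in this README.
graph = {
    "nodes": [
        {
            "id": "index.md",
            "content": "# Welcome\n\nSee the [API](api.md).",
            "summary": "Landing page linking to the API reference.",  # from --summarize
            "linked_node_ids": ["api.md"],
        },
        {
            "id": "api.md",
            "content": "# API\n\nBack to [index](index.md).",
            "summary": "API reference.",
            "linked_node_ids": ["index.md"],
        },
    ]
}

serialized = json.dumps(graph, indent=2)  # what age build would write to disk
restored = json.loads(serialized)         # what age run would load back
```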

Example queries (semantic_retrieve + read_nodes)

Try these in the REPL to see the agent find docs by meaning, then read content and linked nodes:

  • "How can I see what the agent is doing and where do debug logs go?"
    The agent uses semantic_retrieve to find docs about debugging/logging, then read_nodes to read those nodes and their linked_node_ids (e.g. logging → concepts) to answer.

  • "I want to install, run my first query, and understand how the graph is built."
    The agent finds getting-started (and possibly concepts) via semantic_retrieve, then uses read_nodes to read full content and follow links to concepts, examples/quickstart, and graph structure.

  • "Starting from the API reference, what docs does it link to and what do they say?"
    semantic_retrieve finds api.md; then the agent uses read_nodes on that node, sees linked_node_ids (e.g. graph.md, index.md), and calls read_nodes again on those ids to traverse the graph via links rather than another search.

  • "I need three things: how to install on macOS, what the docs say about debugging, and what the CLI options are."
    Three subquestions from different regions (guides, topics, reference). The agent uses separate semantic_retrieve calls to jump to each region, then read_nodes as needed to answer each part.
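The pattern in these examples, a single semantic jump into a region followed by link-following, can be sketched as a breadth-first loop. The link table is illustrative, and each visit stands in for one read_nodes call:

```python
# Illustrative link-following: after one semantic jump lands on a start node,
# the agent reads nodes and queues their linked_node_ids instead of searching again.
# The link table below is an assumption for the sketch, not the sample docs' layout.
LINKS = {
    "api.md": ["graph.md", "index.md"],
    "graph.md": [],
    "index.md": ["api.md"],
}

def follow_links(start: str, max_nodes: int = 10) -> list[str]:
    """Breadth-first traversal over linked_node_ids, mimicking repeated read_nodes calls."""
    visited, queue = [], [start]
    while queue and len(visited) < max_nodes:
        node = queue.pop(0)
        if node in visited:
            continue
        visited.append(node)       # a read_nodes call would return content here
        queue.extend(LINKS[node])  # enqueue linked_node_ids for the next read
    return visited
```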

Development

Formatting and Linting

This project uses ruff for both code formatting and linting.

Format code:

uv run ruff format .

Check linting issues:

uv run ruff check .

Auto-fix linting issues:

uv run ruff check --fix .

Testing

This project uses pytest for testing.

Run all tests:

uv run pytest

Type Checking

This project uses ty for type checking.

Run type checker:

uv run ty check

