
# 🧙‍♂️ LLM-Council — Multi-Model Round-Table Intelligence

*LLM Council Banner*

Created by Mujahid Mahfuz


## 🎯 Overview

LLM-Council is a multi-agent AI project in which, instead of relying on a single large language model, your request is evaluated by a council of multiple LLMs. Each model gives its own opinion, and a Judge Model then analyzes all the answers and delivers the final synthesized verdict.

This structured AI collaboration produces a decision system that is more accurate, reliable, explainable, and transparent.


## 🧠 System Design

*Round-Table Council (Concept Illustration)*

The LLM High Council is a multi-agent AI system in which multiple powerful Large Language Models sit like a round-table council, each offering its own perspective. A final Judge Model evaluates all responses and produces the best final verdict, yielding more accurate, reliable, and trustworthy reasoning.


## 🌀 Concept — Visual Round Table of Models

```
                 [👨‍⚖️ Judge Model 👑]
                          ▲
                          │
       ┌────────🧠────────┼────────🧠────────┐
       │                  │                  │
[Council LLM 1]    [Council LLM 2]    [Council LLM 3]
                          │
                          ⬇ (evaluates and synthesizes)
                 👑 Final Verdict Sent to User
```

Each AI acts as a council member; the Judge compares the answers, selects the best reasoning, and synthesizes the final verdict.
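The round-table pattern above can be sketched in a few lines of async Python. This is a minimal illustration with stub "models" (the member names and stub functions are hypothetical, not the repository's actual code); real LLM API calls would replace the stubs:

```python
import asyncio

async def ask_model(name: str, prompt: str) -> dict:
    # Stub council member: a real version would call the model's API here.
    return {"model": name, "answer": f"{name}'s take on: {prompt}"}

async def judge(prompt: str, opinions: list) -> str:
    # Stub judge: a real judge model would compare answers and synthesize.
    best = opinions[0]
    return f"Verdict based on {len(opinions)} opinions, e.g. {best['model']}"

async def council(prompt: str) -> str:
    # Every member answers concurrently; the judge sees all opinions at once.
    members = ["model-a", "model-b", "model-c"]
    opinions = await asyncio.gather(*(ask_model(m, prompt) for m in members))
    return await judge(prompt, list(opinions))

print(asyncio.run(council("Is the sky blue?")))
```

The key design point is that council members answer independently and in parallel; only the judge sees the full set of opinions.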


## 📦 Models Used


## 🧠 Backend Key Code — FastAPI Async Multi-Model Pipeline

**backend/main.py**

```python
# Query every council model concurrently, then hand the results to the judge.
tasks = [controlled_fetch(model) for model in MODELS]
results = await asyncio.gather(*tasks)
verdict = await get_council_decision(client, request.prompt, results)
```
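The snippet relies on a `controlled_fetch` helper. One plausible shape for it, assuming a semaphore is used to cap concurrent requests (the limit, the stubbed `fetch_model_response`, and the exact signatures are illustrative assumptions, not the repository's code):

```python
import asyncio

MAX_CONCURRENT = 3  # assumed cap; tune to the providers' rate limits
semaphore = asyncio.Semaphore(MAX_CONCURRENT)

async def fetch_model_response(client, model: str, prompt: str) -> str:
    # Placeholder: the real version would POST to the model's API via httpx.
    return f"{model}: answer to {prompt!r}"

async def controlled_fetch(client, model: str, prompt: str) -> dict:
    # Cap in-flight requests so asyncio.gather doesn't overwhelm the APIs.
    async with semaphore:
        answer = await fetch_model_response(client, model, prompt)
        return {"model": model, "answer": answer}
```

Returning `{"model": ..., "answer": ...}` dicts keeps each opinion labeled with its author, which the judge step depends on.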


## 🏆 Judge Evaluator

```python
# Concatenate every council member's labeled answer into one judging context.
context = f"User Question: {prompt}\n\n"
for item in model_responses:
    context += f"Model {item['model']} said:\n{item['answer']}\n\n"

context += "You are the Head Councilor. Pick the best answer and explain why."
decision = await fetch_model_response(client, judge_model, context)


## 🖥 Frontend Rendering

**frontend/App.jsx**

```jsx
<div className="grid">
  {responses.map((item, index) => (
    <div key={index} className="card">
      <h3>{item.model}</h3>
      <p>{item.answer}</p>
    </div>
  ))}
</div>
```


**frontend/App.css**

```css
.grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
  gap: 20px;
}
```

## 🚀 Running Locally

**Backend — FastAPI**

```shell
cd backend
pip install fastapi uvicorn httpx python-dotenv
uvicorn main:app --reload
```

**Frontend — React + Vite**

```shell
cd frontend
npm install
npm run dev
```

Open in your browser: http://localhost:5173


## About

Imagine four, five, or many more scientists sitting at a round table, with one of them serving as chairman or judge. That's what LLM-Council is: LLMs sitting on a council together to decide, by consensus, which of their answers is the best.
