CodeMasterPro v1.1.1

Overview

Tars is the ultimate AI-powered coding companion for software engineers: refined, multi-stage code generation and validation, plus reinforcement learning that continuously improves your results.
- Features
- Requirements
- Installation
- Configuration
- Usage
- Example Prompts
- Contribution
- License
- Disclaimer
Features

- **Code Assistant**: Syntax help, debugging tips, and deep dives into tricky concepts.
- **Multi-Stage Generation**: Every snippet runs through three stages (draft → refine → validate) to catch errors early.
- **Reinforcement Learning "Thinker"**: Tars learns from your feedback, rewarding good answers and penalizing bad ones.
- **StackOverflow Integration**: Search StackOverflow for quick snippets and answers.
- **Chat History & Restoration**: Save chats for later use or restore recent messages (for protection).
- **Fully Customizable**: Set custom prompts, request the type of code, and add other preferences.
- **Gemini Models**: Choose from three Gemini models based on the speed and accuracy you need.
- **FAISS & Web Integration**: Instantly search StackOverflow, internal resources, or Brave Search, and cache results locally.
- **Interactive Code Blocks**: Click Use on any sample to inject it into your prompt. Quick actions make editing a breeze.
- **Saved Snippets**: Save snippets for later and quick use.
- **Manual Documentation & Automated Web Scraping**: Add documentation files or links to your local system; public, accessible links are scraped and the results are saved to FAISS for quick access.
- **Live HTML & Python Execution**: Preview HTML in-browser. Run Python in an isolated venv with retry & self-correction.
- **Contextual Memory**: Intelligent memory decay and embeddings keep track of your code, dependencies, and intent.
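The contextual-memory feature combines embeddings with score decay; a minimal sketch of how time-based decay of a memory's relevance score might work (all names here are hypothetical illustrations, not Tars's actual implementation):

```python
import math
import time

def decayed_score(base_score: float, stored_at: float,
                  half_life_s: float = 3600.0) -> float:
    """Exponentially decay a memory's relevance score over time.

    half_life_s: seconds until the score halves (hypothetical default).
    """
    age = time.time() - stored_at
    return base_score * math.pow(0.5, age / half_life_s)

# A memory stored two half-lives ago retains about 25% of its score,
# so fresher context naturally outranks stale context at retrieval time.
score = decayed_score(1.0, time.time() - 7200.0, half_life_s=3600.0)
```

Older memories are not deleted outright; their scores simply shrink until retrieval prefers newer context.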
Requirements

- Docker (20.10+)
- Google API Key (required for the Gemini models)
- Brave API Key (optional, for web search; 2,000 free requests per month)
Installation

Pull the Docker image:

```shell
# macOS (arm64)
docker pull stefankumarasinghe/codemasterpro:latest

# Windows (amd64)
docker pull stefankumarasinghe/codemasterpro:amd64
```

Run the container, using the tag you pulled (`latest` or `amd64`):

```shell
docker run -e GOOGLE_API_KEY=<YOUR_GOOGLE_API_KEY> \
  -e BRAVE_API_KEY=<YOUR_BRAVE_API_KEY> \
  -p 8000:8000 stefankumarasinghe/codemasterpro:latest
```
Configuration

| Env Variable | Required | Description |
|---|---|---|
| `GOOGLE_API_KEY` | Yes | Your Gemini (Google Generative AI) API key. |
| `BRAVE_API_KEY` | No | Token for Brave Search integration. |
Usage

- Visit: [https://dwr4zchmi6x24.cloudfront.net/](https://dwr4zchmi6x24.cloudfront.net/)
- Say hello to Tars: "Hey Tars, explain how async/await works in JavaScript."
- Copy, refine, and run code directly in the UI.
- Or run HTML and Python directly using Tars.
Example Prompts

- Syntax & Debugging: "Why am I getting `TypeError: undefined` in this snippet?"
- Optimization Tips: "Can you give me an MCP server using the documentation in my internal resources?"
How a prompt is processed:

- Fetches resources
- Fetches memory (decay memory and a summary of the chat history)
- Gauges user sentiment: whether you were happy with the previous answer or not
- Rewards or punishes the RL agent accordingly
- Runs the generation chain first, passing in all relevant context and the user sentiment
- Runs the refinement stage next
- Finishes with a quick validation chain
- If Thinker mode is off, it returns the answer, after first saving to memory and rewarding or punishing the RL agent based on the semantic and validation scores
- If Thinker mode is on, the RL agent may act greedily and search for a better solution while expanding its memory for more context, even when the reward and validation scores are already high. After 2-4 attempts, it keeps the best outcome and updates the Q-value.
- Returns an optimized answer that is then cleaned
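The reward and Q-value steps above can be sketched as a simple tabular-style update. Both helpers below are hypothetical illustrations under assumed 0-100 scoring, not Tars's actual code:

```python
def update_q(q: float, reward: float, alpha: float = 0.1) -> float:
    """Nudge the stored Q-value toward the observed reward
    (simplified, stateless form of the tabular Q-learning update)."""
    return q + alpha * (reward - q)

def reward_from_scores(semantic: float, validation: float) -> float:
    """Combine semantic and validation scores (assumed 0-100) into a
    reward in [-1, 1]: above 50 on average rewards, below punishes."""
    avg = (semantic + validation) / 2.0
    return (avg - 50.0) / 50.0

# Each answer's scores shift the agent's estimate a little at a time.
q = 0.0
for sem, val in [(80, 90), (95, 100), (40, 30)]:
    q = update_q(q, reward_from_scores(sem, val))
```

The learning rate `alpha` controls how quickly one good or bad answer changes the agent's behavior.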
For large inputs (over 25,000 characters, or roughly 500+ lines of code):

- Breaks the code into chunks, aiming for 2,000-5,000 characters each
- Processes each chunk from top to bottom, with the context of the original code and the code generated so far
- Retries a chunk if its validation score is below 90/100 (maximum 5 retries), with each retry shrinking the context, reducing noise, and sending feedback
- Reassembles the processed chunks into the full output
- Passes the result through a lite Gemini model for basic checks, then outputs the code or analysis
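The chunking step above can be sketched like this; `chunk_source` is a hypothetical helper that splits only at line boundaries while aiming for the 2,000-5,000-character range described:

```python
def chunk_source(code: str, target: int = 3000, max_len: int = 5000) -> list[str]:
    """Split source into chunks near `target` chars, breaking only at
    line boundaries so no statement is cut mid-line."""
    chunks: list[str] = []
    current: list[str] = []
    size = 0
    for line in code.splitlines(keepends=True):
        # Flush early if adding this line would exceed the hard cap.
        if size + len(line) > max_len and current:
            chunks.append("".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
        # Flush once we reach the target size.
        if size >= target:
            chunks.append("".join(current))
            current, size = [], 0
    if current:
        chunks.append("".join(current))
    return chunks

big = "x = 1\n" * 2000          # ~12,000 characters of toy source
parts = chunk_source(big)
```

Joining the chunks back together reproduces the input exactly, which is what lets the processed pieces be reassembled into the full output afterward.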
When Tars runs Python code:

- First checks via a chain whether the code is runnable; if not, makes it self-contained
- Runs it through another chain to determine the required pip installs
- Executes the code; if it fails, the error is sent back to the chain as feedback
- Repeats this closed feedback loop (maximum 5 tries) until the code runs without errors; it does not check for logical issues such as missing features
- Returns the output and the corrected final code
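The closed feedback loop above can be sketched with a subprocess runner. Everything here is an illustrative assumption: `fix_fn` stands in for the LLM correction chain, and the toy fixer below just returns a hard-coded replacement.

```python
import subprocess
import sys
import tempfile

def run_with_retries(code: str, fix_fn, max_tries: int = 5):
    """Run Python code in a subprocess; on failure, pass the stderr to
    fix_fn (an LLM chain in the real system) and retry with the fix."""
    for attempt in range(max_tries):
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        result = subprocess.run([sys.executable, path],
                                capture_output=True, text=True, timeout=30)
        if result.returncode == 0:
            return result.stdout, code  # output and final working code
        code = fix_fn(code, result.stderr)  # feed the error back
    raise RuntimeError("code still failing after retries")

# Toy fixer: the real loop would send the traceback to the model.
out, fixed = run_with_retries("print(undefined)",
                              lambda c, err: "print('fixed')")
```

As in the description above, this loop only guarantees the code runs without errors; it says nothing about whether the logic is complete.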
Contribution

This project is MIT-licensed and open to all.
Feel free to file issues, submit PRs, or suggest new features; just give credit where it's due!
Up to v1.1.1, CodeMasterPro was developed solely by Stefan Kumarasinghe.

License

MIT © Stefan Kumarasinghe
Disclaimer

This project is not affiliated with or endorsed by any company or organization.
Tars generates code using AI; always review and test before using it in production.
Use at your own risk; no warranty is provided.