Easily start your project with the frontier models of OpenAI, Anthropic, Google, and Meta AI.
Welcome to LLM Hub, a streamlined repository designed to facilitate access to the world's leading large language models (LLMs). This project simplifies the integration of major LLMs, primarily through their APIs.
In the initial release, we support:
- GPT models from OpenAI: Harness the power of OpenAI's cutting-edge language models.
- Claude 3 models from Anthropic: Experience the nuanced understanding of Anthropic's Claude 3.
- Gemini models from Google: Leverage Google's advanced Gemini models for robust AI interactions.
- Llama 3 models from Meta AI: Dive into the depth of knowledge encapsulated by Meta's Llama 3.
- Simplicity at its Core: We believe that interacting with frontier AI models should be straightforward and fuss-free. Our interface is crafted to ensure ease of use.
- Intuitive Interaction: Users can easily send inputs and manage dialogue histories, making AI conversations more seamless and effective.
The first step is to set up your OpenAI, Anthropic, Google, and Groq API keys: OpenAI for the GPT models, Anthropic for the Claude 3 models, Google for the Gemini models, and Groq for the Llama 3 models.
- Get an API key for OpenAI at: https://platform.openai.com/api-keys
- Follow the directions for generating an Anthropic API key at: https://docs.anthropic.com/claude/reference/getting-started-with-the-api
- Get an API key for Google at: https://aistudio.google.com/app/apikey
- Create a free account and get an API key for Groq at: https://console.groq.com/keys
Next, create four text files called OPENAI_API_KEY.txt, ANTHROPIC_API_KEY.txt, GOOGLE_API_KEY.txt, and GROQ_API_KEY.txt. Paste your respective API keys into the text files.
- Make a Python virtual environment

  ```sh
  python3.10 -m venv llm-hub-env
  source llm-hub-env/bin/activate
  ```

- Clone the repo

  ```sh
  git clone https://github.com/mdsunbeam/llm-hub.git
  cd llm-hub
  ```

- Install Python packages

  ```sh
  pip install -r requirements.txt
  ```
This is a simple example of how to send in images and text. Note that Llama 3 is not currently multimodal; you will need a separate embedding model if you want to pass along images.
```python
from llms import GPT, Claude3, Gemini
import cv2

if __name__ == "__main__":
    MODELS = {
        "OpenAI": ["gpt-4-turbo", "gpt-4o", "gpt-3.5-turbo"],
        "Anthropic": ["claude-3-opus-20240229", "claude-3-sonnet-20240229", "claude-3-haiku-20240307"],
        "Google": ["gemini-1.5-pro-latest", "gemini-pro", "gemini-pro-vision", "gemini-1.5-flash-latest"],
        "Meta": ["llama3-70b-8192", "llama3-8b-8192"]
    }

    logo = cv2.imread("images/llm-hub-logo.jpg")
    system_message = "You are a helpful assistant."
    text = "Describe what you see in this image."

    gpt4turbo = GPT(model_name=MODELS["OpenAI"][0], system_message=system_message)
    gpt4turbo.add_user_message(frame=logo, user_msg=text)
    print("GPT4Turbo: ", gpt4turbo.generate_response())

    opus = Claude3(model_name=MODELS["Anthropic"][0], system_message=system_message)
    opus.add_user_message(frame=logo, user_msg=text)
    print("Claude 3 Opus: ", opus.generate_response())

    gemini_1_5_pro = Gemini(model_name=MODELS["Google"][0], system_message=system_message)
    gemini_1_5_pro.add_user_message(frame=logo, user_msg=text)
    print("Gemini 1.5 Pro: ", gemini_1_5_pro.generate_response())
```

Text-only prompts work the same way; pass `frame=None`, which also lets you use the Llama 3 models:

```python
from llms import GPT, Claude3, Gemini, Llama3

if __name__ == "__main__":
    MODELS = {
        "OpenAI": ["gpt-4-turbo", "gpt-4o", "gpt-3.5-turbo"],
        "Anthropic": ["claude-3-opus-20240229", "claude-3-sonnet-20240229", "claude-3-haiku-20240307"],
        "Google": ["gemini-1.5-pro-latest", "gemini-pro", "gemini-pro-vision", "gemini-1.5-flash-latest"],
        "Meta": ["llama3-70b-8192", "llama3-8b-8192"]
    }

    system_message = "You are a helpful assistant."
    text = "When was George Washington born?"

    gpt4turbo = GPT(system_message=system_message)
    gpt4turbo.add_user_message(frame=None, user_msg=text)
    print("GPT4Turbo: ", gpt4turbo.generate_response())

    opus = Claude3(system_message=system_message)
    opus.add_user_message(frame=None, user_msg=text)
    print("Claude 3 Opus: ", opus.generate_response())

    gemini_1_5_pro = Gemini(system_message=system_message)
    gemini_1_5_pro.add_user_message(frame=None, user_msg=text)
    print("Gemini 1.5 Pro: ", gemini_1_5_pro.generate_response())

    llama3_70b = Llama3(model_name=MODELS["Meta"][0], system_message=system_message)
    llama3_70b.add_user_message(frame=None, user_msg=text)
    print("Llama3 70B: ", llama3_70b.generate_response())
```

Planned features and improvements:

- Handling of image and text with all frontier models
- Make example of all GPT, Claude 3, and Gemini family of models
- Add tool calling for all the classes
- Add separate, detailed documentation
- Add more context management functions
- Add function to print role messages
- Add function to delete specific role messages
- Add example of context management function
- Llama 3 (set up via the Groq API)
- Pass arbitrary amount of messages in one go
- Poll all frontier models for the same prompt
- Reproduce results on popular LLM and multimodal datasets
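One roadmap item above, polling all frontier models with the same prompt, could be sketched along these lines. Both `poll_models` and the stand-in callables are hypothetical; real usage would wrap the `GPT`, `Claude3`, `Gemini`, and `Llama3` classes:

```python
from typing import Callable, Dict


def poll_models(clients: Dict[str, Callable[[str], str]], prompt: str) -> Dict[str, str]:
    """Send the same prompt to every client and collect replies keyed by model name."""
    return {name: ask(prompt) for name, ask in clients.items()}


# Stand-in callables; a real client would add the prompt as a user message
# and call generate_response() on the wrapper.
clients = {
    "gpt-4-turbo": lambda p: f"[gpt-4-turbo] {p}",
    "claude-3-opus": lambda p: f"[claude-3-opus] {p}",
}
replies = poll_models(clients, "When was George Washington born?")
```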
See the open issues for a full list of proposed features (and known issues).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/new-feature`)
- Commit your Changes (`git commit -m 'added new feature'`)
- Push to the Branch (`git push origin feature/new-feature`)
- Open a Pull Request
Distributed under the MIT License. See LICENSE for more information.
@MdSunbeam - mdsunbeam3.14@gmail.com
Project Link: https://github.com/mdsunbeam/llm-hub