LLM-CLI is a command-line interface (CLI) tool designed to interact with large language models (LLMs) and execute shell commands. This tool allows users to send natural language queries to an LLM and receive responses or execute commands interactively.
- Query large language models (LLMs) directly from the command line.
- Interactive chat mode for continuous dialogue with the LLM.
- Execute shell commands, including `cd`, and chain commands using `&&`.
- Verbose mode to print raw responses and other debugging information.
- Error handling with detailed feedback for API or shell execution errors.
- Dynamic prompt generation to assist in completing complex shell commands.
Since this project is not published on PyPI, you can install it locally with pipx or pip using the built `.whl` file.
First, clone the repository and navigate to the project directory:
```
git clone https://github.com/your-repo/llm-cli.git
cd llm-cli
```

Build the `.whl` file using Poetry:

```
poetry build
```

This will generate a `.whl` file inside the `dist/` directory.
To install the package with pipx, use the following command:
```
pipx install dist/llm_cli-0.1.0-py3-none-any.whl
```

This installs the CLI tool in an isolated environment, making it easy to manage.
Alternatively, you can install the tool using pip:
```
pip install --user dist/llm_cli-0.1.0-py3-none-any.whl
```

This installs the package for the current user without needing administrator permissions.
To query the LLM, simply run:
```
llm -q "Your question here"
```

This sends the query to the LLM, prints the response, and then enters an interactive mode where the tool waits for further input. In interactive mode, after each response you can continue the conversation or type `exit` to end the session.
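The interactive loop can be sketched as follows. This is a minimal illustration, not the tool's actual implementation; `get_input` and `ask_llm` are hypothetical stand-ins for reading user input and calling the LLM API.

```python
def run_interactive(get_input, ask_llm):
    """Minimal sketch of the interactive loop: read input until 'exit'."""
    transcript = []
    while True:
        user_input = get_input()
        if user_input.strip().lower() == "exit":
            break  # user ended the session
        reply = ask_llm(user_input)
        transcript.append((user_input, reply))
    return transcript
```

In the real CLI, `get_input` would prompt on stdin and `ask_llm` would send the conversation history to the model.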
LLM-CLI can execute complex shell commands based on natural language requests. The tool breaks down the tasks into multiple steps and asks for confirmation before executing each one.
If you request the following task:
```
llm create a dir called test and create a shell script that prints hello world and run it
```

The tool will break the request into commands and execute them step by step, asking for confirmation before each action.
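The confirm-before-execute flow described above can be sketched like this. It is an illustrative sketch, not the tool's actual code; `confirm` stands in for the interactive prompt, and the injectable `runner` exists only to make the sketch testable.

```python
import subprocess

def execute_plan(commands, confirm, runner=None):
    """Run each planned shell command only after the user confirms it.

    `commands` is the list of shell steps the LLM proposed; `confirm` is a
    callable returning True/False (in the real CLI this would prompt the user).
    """
    if runner is None:
        runner = lambda cmd: subprocess.run(cmd, shell=True, check=True)
    executed = []
    for cmd in commands:
        if not confirm(cmd):
            continue  # user declined this step; skip it
        runner(cmd)
        executed.append(cmd)
    return executed
```

Asking per step keeps the user in control even when the LLM proposes a multi-command plan.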
- `-q, --query`: The query to send to the LLM (required for non-interactive usage).
- `-m, --model`: The LLM model to use. Can be set via environment variables or passed on the command line.
- `-v, --verbose`: Output additional information about the request and response.
- `command`: Any shell command or query to execute through the CLI.
LLM-CLI uses the OPENAI_API_KEY environment variable for API access.
In your terminal, set the OPENAI_API_KEY:
```
export OPENAI_API_KEY="your-openai-api-key"
```

LLM-CLI checks that the API key is set before sending any queries.
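The key check can be sketched as a simple guard, assuming the tool fails fast with a clear message when the variable is unset (the exact error text here is illustrative):

```python
import os

def require_api_key(env=None):
    """Return the OpenAI API key, or raise a clear error if it is unset."""
    env = os.environ if env is None else env
    key = env.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            'OPENAI_API_KEY is not set. Run: export OPENAI_API_KEY="your-key"'
        )
    return key
```

Failing before any network call gives the user immediate, actionable feedback instead of an opaque API error.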
If you do not specify a model with the `--model` option, LLM-CLI automatically defaults to `gpt-4o-mini`. The CLI checks the following in order of preference:

- The model provided via the `--model` option.
- The default model: `gpt-4o-mini`.
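The fallback order above can be sketched as a small resolver. The `LLM_CLI_MODEL` environment variable name is an assumption for illustration; the source only says the model can be set via environment variables.

```python
DEFAULT_MODEL = "gpt-4o-mini"

def resolve_model(cli_model=None, env=None):
    """Pick the model: --model flag first, then an env var, then the default.

    LLM_CLI_MODEL is a hypothetical variable name used for this sketch.
    """
    env = env or {}
    if cli_model:
        return cli_model
    return env.get("LLM_CLI_MODEL") or DEFAULT_MODEL
```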
To set up a local development environment:
```
git clone https://github.com/your-repo/llm-cli.git
cd llm-cli
poetry install
```

To run the CLI in development mode:
```
poetry run llm -q "Test query"
```

Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License. See the LICENSE file for details.