Added customizable LLM API providers, system prompt, reasoning mode, full-response display, and text editor customization #4
Walkthrough

The plugin is extended to support multiple OpenAI-compatible API providers beyond OpenRouter. Configuration adds a customizable API URL, system prompt, reasoning mode toggle, and text editor selection. Core logic is updated to use the configurable provider URL, include system prompts in model messages, and pass reasoning flags in API payloads.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant FlowLauncher
    participant SettingsTemplate as Settings
    participant main.py as Plugin Logic
    participant APIProvider as API Provider
    participant TextEditor
    User->>FlowLauncher: Query AI
    FlowLauncher->>main.py: Process Query
    main.py->>SettingsTemplate: Load Configuration
    SettingsTemplate-->>main.py: api_url, api_key, system_prompt,<br/>reasoning, full_response, text_editor
    rect rgb(200, 220, 240)
        Note over main.py: Construct Model Messages
        main.py->>main.py: Build message with system_prompt
    end
    main.py->>APIProvider: POST with reasoning flag
    Note over APIProvider: configurable_url/chat/completions
    APIProvider-->>main.py: Response (answer)
    rect rgb(220, 240, 200)
        Note over main.py: Process Response
        main.py->>main.py: Normalize whitespace,<br/>derive subtitle from full_response
    end
    alt Show Full Response
        main.py-->>FlowLauncher: Result with full answer
    else Abbreviated
        main.py-->>FlowLauncher: Result with trimmed answer
    end
    User->>FlowLauncher: Select "Open in notepad" Action
    FlowLauncher->>main.py: open_in_notepad(text, text_editor)
    main.py->>TextEditor: Launch with configured editor
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes. Multiple interdependent changes across settings, API logic, and function signatures. Main complexity stems from API flow modifications (message construction with system prompts, reasoning flag integration, configurable provider URLs), response handling refactoring, and function signature updates.
Pre-merge checks and finishing touches: ✅ 3 checks passed.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
readme.md (3)
34-36: Update environment variable name.

The documentation still references `OPENROUTER_API_KEY`, but the code now uses `FLOWLLM_API_KEY` (see main.py line 37). This inconsistency will confuse users. Apply this diff to update the environment variable name:

```diff
-1. Create an environment variable named `OPENROUTER_API_KEY` with your API key from [OpenRouter](https://openrouter.ai/keys)
+1. Create an environment variable named `FLOWLLM_API_KEY` with your API key from your chosen provider (e.g., [OpenRouter](https://openrouter.ai/keys))
```
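For illustration, a minimal sketch of how the renamed variable could be read. The fallback to the legacy name is an assumption for smoother migration, not part of the PR; the helper name is hypothetical:

```python
import os

# Hypothetical helper: prefer the renamed key, fall back to the legacy
# name so existing setups keep working (fallback is illustrative only).
def read_api_key() -> "str | None":
    return os.environ.get("FLOWLLM_API_KEY") or os.environ.get("OPENROUTER_API_KEY")

os.environ["FLOWLLM_API_KEY"] = "sk-test"
print(read_api_key())  # -> sk-test
```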
40-40: Update default model reference.

The documentation shows the old default model (`deepseek/deepseek-chat:free`), but it was changed to `mistralai/mistral-small-3.2-24b-instruct:free` in SettingsTemplate.yaml and main.py. Apply this diff to update the default model:

```diff
-- `default_model`: The AI model to use (default: "deepseek/deepseek-chat:free")
+- `default_model`: The AI model to use (default: "mistralai/mistral-small-3.2-24b-instruct:free")
```
70-70: Update troubleshooting section.

The troubleshooting section still references `OPENROUTER_API_KEY`, which should be updated to `FLOWLLM_API_KEY` to match the code changes. Apply this diff:

```diff
-- **API Key not set**: Make sure the `OPENROUTER_API_KEY` environment variable is set correctly
+- **API Key not set**: Make sure the `FLOWLLM_API_KEY` environment variable is set correctly
```

main.py (1)
107-107: Update error message to match new environment variable name.

The error message still references `OPENROUTER_API_KEY`, but the code now reads from `FLOWLLM_API_KEY` (line 37). This inconsistency will confuse users during troubleshooting. Apply this diff to update the error message:

```diff
-    SubTitle="Set OPENROUTER_API_KEY environment variable",
+    SubTitle="Set FLOWLLM_API_KEY environment variable or configure API key in settings",
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
- `images/app.png` is excluded by `!**/*.png`
📒 Files selected for processing (4)
- SettingsTemplate.yaml (1 hunks)
- main.py (9 hunks)
- plugin.json (1 hunks)
- readme.md (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
main.py (2)
- lib/pyflowlauncher/settings.py (1): `settings` (5-7)
- lib/pyflowlauncher/plugin.py (2): `settings` (65-69), `run` (71-82)
🪛 Ruff (0.14.0)
main.py
121-121: Probable use of requests call without timeout (S113)
🔇 Additional comments (4)
plugin.json (1)
6-7: LGTM! The metadata updates appropriately reflect the collaborative contribution and version progression.
SettingsTemplate.yaml (1)
3-6: LGTM! The configuration additions properly support the new provider-agnostic functionality, with appropriate defaults and clear field labels.
Also applies to: 8-8, 13-14, 21-21, 23-28
readme.md (1)
1-3: LGTM! The documentation updates properly reflect the provider-agnostic functionality and new features.
Also applies to: 9-11, 28-28
main.py (1)
19-22: LGTM! The implementation properly integrates configurable provider URL and system prompt functionality. The message construction logic correctly handles the optional system prompt, and the settings retrieval includes appropriate fallbacks.
Also applies to: 37-37, 82-88, 112-117, 122-122, 126-127
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
readme.md (1)
72-72: Update the environment variable name in troubleshooting.

The troubleshooting section still references the old `OPENROUTER_API_KEY` environment variable, which is inconsistent with the updated variable name `FLOWLLM_API_KEY` mentioned in line 37. Apply this diff to maintain consistency:

```diff
-- **API Key not set**: Make sure the `OPENROUTER_API_KEY` environment variable is set correctly
+- **API Key not set**: Make sure the `FLOWLLM_API_KEY` environment variable is set correctly
```
♻️ Duplicate comments (2)
SettingsTemplate.yaml (1)
7-7: Fix the field description.

The description "Select the AI model to use for queries" is incorrect for an API URL field. It appears to be copied from the `default_model` field description. Apply this diff to correct the description:

```diff
- description: "Select the AI model to use for queries"
+ description: "The API endpoint URL for your LLM provider (supports OpenAI-compatible APIs)"
```

main.py (1)
128-143: Add timeout to prevent indefinite hangs.

The HTTP request lacks a timeout parameter, which can cause the plugin to hang indefinitely if the API provider is unresponsive. Apply this diff to add a timeout:

```diff
     response = requests.post(
         api_provider_url,
         headers={
             "Authorization": f"Bearer {api_key}",
             "Content-Type": "application/json",
             "HTTP-Referer": "https://github.com/xenongee/Flow.Launcher.Plugin.AI-Assistant",
             "X-Title": "AI Assistant - Flow Launcher Plugin"
         },
         json={
             "model": default_model,
             "messages": model_messages,
             "reasoning": {
                 "enabled": reasoning
             }
-        }
+        },
+        timeout=30
     )
```
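One way to keep the timeout from being forgotten again is to centralize the request kwargs in a small helper, which is also unit-testable without touching the network. A sketch under the assumption of a 30-second budget; the helper name is illustrative, not from the plugin:

```python
# Hypothetical helper that centralizes request kwargs so the timeout is
# always present (names are illustrative, not from the plugin).
def build_request_kwargs(url: str, api_key: str, payload: dict, timeout: float = 30.0) -> dict:
    return {
        "url": url,
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": payload,
        "timeout": timeout,  # without this, requests can block indefinitely
    }

kwargs = build_request_kwargs("https://openrouter.ai/api/v1/chat/completions", "sk-x", {"model": "m"})
print(kwargs["timeout"])  # -> 30.0
```

The plugin would then call `requests.post(**build_request_kwargs(...))`, so every call site gets the same timeout policy.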
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `images/copy.png` is excluded by `!**/*.png`
- `images/note.png` is excluded by `!**/*.png`
📒 Files selected for processing (3)
- SettingsTemplate.yaml (1 hunks)
- main.py (7 hunks)
- readme.md (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
main.py (3)
- lib/pyflowlauncher/plugin.py (4): `Plugin` (20-102), `settings` (65-69), `on_method` (37-42), `run` (71-82)
- lib/pyflowlauncher/result.py (4): `Result` (40-65), `send_results` (73-75), `ResultResponse` (68-70), `JsonRPCAction` (18-22)
- lib/pyflowlauncher/settings.py (1): `settings` (5-7)
🪛 Ruff (0.14.0)
main.py
128-128: Probable use of requests call without timeout (S113)
250-250: subprocess call: check for execution of untrusted input (S603)
251-251: Do not catch blind exception: Exception (BLE001)
Actionable comments posted: 5
🧹 Nitpick comments (2)
SettingsTemplate.yaml (1)
3-8: LGTM! Consider adding endpoint clarity.

The description has been corrected from the previous review. The current description accurately identifies this as an API provider URL rather than a model selection field. Optionally, you could make the description slightly more specific by mentioning this is the full API endpoint path:

```diff
- description: "URL of the OpenAI-compatible API provider"
+ description: "Full API endpoint URL for your OpenAI-compatible LLM provider"
```
18-22: Consider refining the system prompt.

The constants are well-structured and align with the settings template. However, DEFAULT_SYSTEM_PROMPT contains a specific instruction about terminal commands ("If a user asks a question that can be answered in the terminal, only provide commands") that may be too prescriptive for a general-purpose assistant. Consider simplifying the system prompt or making the terminal command instruction optional:

```diff
-DEFAULT_SYSTEM_PROMPT = "You are an assistant providing concise, factually accurate responses with no formatting. Reply only with a plain-text answer: no markdown, lists, explanations, or extra text. Prioritize brevity and precise truth above all else. If a user asks a question that can be answered in the terminal, only provide commands."
+DEFAULT_SYSTEM_PROMPT = "You are an assistant providing concise, factually accurate responses with no formatting. Reply only with a plain-text answer: no markdown, lists, explanations, or extra text. Prioritize brevity and precise truth above all else."
```

Note: The raw string prefix `r` on line 22 is unnecessary for `"notepad.exe"` as it contains no backslashes, but it's harmless.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- SettingsTemplate.yaml (1 hunks)
- main.py (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
main.py (3)
lib/pyflowlauncher/plugin.py (4)
Plugin(20-102)settings(65-69)on_method(37-42)run(71-82)lib/pyflowlauncher/result.py (4)
Result(40-65)send_results(73-75)ResultResponse(68-70)JsonRPCAction(18-22)lib/pyflowlauncher/settings.py (1)
settings(5-7)
🪛 Ruff (0.14.0)
main.py
256-256: subprocess call: check for execution of untrusted input (S603)
257-257: Do not catch blind exception: Exception (BLE001)
🔇 Additional comments (12)
SettingsTemplate.yaml (6)
21-21: LGTM! The default model has been updated and is consistent with the DEFAULT_MODEL constant in main.py.
23-28: LGTM! The system prompt field provides good customization options with a sensible default that emphasizes concise, factual responses. This aligns well with the plugin's purpose as a quick-answer tool.
30-35: LGTM! The reasoning toggle provides users with control over whether to enable reasoning mode in their LLM queries.
42-42: LGTM! The delimiter default is consistent with the DEFAULT_DELIMITER constant in main.py.
44-49: LGTM! The full response toggle gives users control over whether to see the complete AI response or just a preview in the subtitle, providing good UX flexibility.
51-56: LGTM! The text editor field allows users to customize their preferred editor with a sensible default. Validation of the editor path is handled in main.py.
main.py (6)
10-10: LGTM! The `re` module import is necessary for the whitespace normalization on line 158.
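For context, the normalization that the import supports is typically a one-liner; a sketch assuming the plugin collapses whitespace runs the usual way (the function name is illustrative):

```python
import re

def normalize_whitespace(text: str) -> str:
    # Collapse newlines, tabs, and repeated spaces into single spaces,
    # then trim the ends
    return re.sub(r"\s+", " ", text).strip()

print(normalize_whitespace("  hello\n\n  world\t!  "))  # -> hello world !
```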
88-95: LGTM! The new settings are retrieved consistently with appropriate defaults that match the defined constants and settings template.
119-124: LGTM! The message construction correctly builds the messages array with an optional system prompt followed by the user message, conforming to the OpenAI chat completion format.
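The optional-system-prompt pattern described above can be sketched as follows; the function name is illustrative, not the plugin's actual signature:

```python
def build_messages(query: str, system_prompt: str = "") -> list:
    # System message goes first only when a prompt is configured,
    # matching the OpenAI chat-completions message format.
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": query})
    return messages

print(build_messages("2+2?", "Be concise."))
# -> [{'role': 'system', 'content': 'Be concise.'}, {'role': 'user', 'content': '2+2?'}]
```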
153-180: LGTM! The response handling improvements provide better UX:
- Configurable full response vs preview display
- Model name in the title for transparency
- Dynamic text editor name in the UI
- Proper context data passing for menu actions
223-223: LGTM! More appropriate icon for the copy action.
262-290: LGTM! The context menu properly handles the new data structure while maintaining backward compatibility with string data. The dynamic text editor display and updated icons improve the user experience.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
readme.md (1)
42-44: Update defaults and list all new settings.

Docs say `default_model` is "deepseek/deepseek-chat:free" but the code default is "mistralai/mistral-small-3.2-24b-instruct:free". Also, the new settings (api_url, system_prompt, reasoning, full_response, text_editor) are missing.

```diff
 The plugin supports the following settings:
-- `default_model`: The AI model to use (default: "deepseek/deepseek-chat:free")
+- `default_model`: The AI model to use (default: "mistralai/mistral-small-3.2-24b-instruct:free")
 - `delimiter`: Symbol that indicates when to send a prompt (default: "||")
+- `api_url`: OpenAI-compatible chat/completions endpoint (default: https://openrouter.ai/api/v1/chat/completions)
+- `system_prompt`: Default system prompt injected into requests (default: concise, plain-text answers)
+- `reasoning`: Enable/disable reasoning mode (provider-specific; supported on OpenRouter) (default: True)
+- `full_response`: Show full response in the subtitle vs a preview (default: True)
+- `text_editor`: Editor used to open responses (default: notepad.exe)
```
♻️ Duplicate comments (3)
main.py (3)
58-66: Fix falsy-handling bug in get_settings (breaks boolean prefs).

Explicit False/0/"" are being replaced by defaults. Preserve falsy values; only fall back when None/missing.

```diff
-    value = _settings_cache.get(key, default)
-
-    if not value:
-        value = default
+    value = _settings_cache.get(key)
+    if value is None:
+        value = default
     return value
```
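The difference is easy to demonstrate: with `not value`, a user's stored False flips back to the default, while an explicit None check preserves it. A minimal repro, with a plain dict standing in for the settings cache:

```python
# Stand-in for the plugin's settings cache; a user has explicitly
# disabled reasoning and cleared the delimiter.
_settings_cache = {"reasoning": False, "delimiter": ""}

def get_buggy(key, default):
    value = _settings_cache.get(key, default)
    if not value:          # bug: False, 0, and "" all fall back to default
        value = default
    return value

def get_fixed(key, default):
    value = _settings_cache.get(key)
    if value is None:      # only missing keys fall back
        value = default
    return value

print(get_buggy("reasoning", True))   # -> True (user's False is lost)
print(get_fixed("reasoning", True))   # -> False (user's choice preserved)
```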
126-144: Gate the provider-specific "reasoning" field; otherwise non-OpenRouter endpoints may error.

Add the field only for OpenRouter (or map per provider). Keeps requests OpenAI-compatible.

```diff
-    response = requests.post(
-        api_provider_url,
-        headers={
+    # Build payload provider-safely
+    payload = {
+        "model": default_model,
+        "messages": model_messages,
+    }
+    if "openrouter" in api_provider_url.lower():
+        payload["reasoning"] = {"enabled": reasoning}
+
+    response = requests.post(
+        api_provider_url,
+        headers={
             "Authorization": f"Bearer {api_key}",
             "Content-Type": "application/json",
             "HTTP-Referer": "https://github.com/xenongee/Flow.Launcher.Plugin.AI-Assistant",
             "X-Title": "AI Assistant - Flow Launcher Plugin"
         },
-        json={
-            "model": default_model,
-            "messages": model_messages,
-            "reasoning": {
-                "enabled": reasoning
-            }
-        },
+        json=payload,
         timeout=30
     )
```

OpenRouter reasoning param: confirm the current JSON schema and whether OpenAI chat/completions accepts or ignores unknown top-level fields like "reasoning".
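Factoring the payload construction into a pure function makes the gating unit-testable; a sketch under the assumption that only OpenRouter understands the non-standard "reasoning" field (the function name is illustrative):

```python
def build_payload(model: str, messages: list, api_url: str, reasoning: bool) -> dict:
    payload = {"model": model, "messages": messages}
    # Assumption: only OpenRouter accepts the non-standard "reasoning"
    # field; other OpenAI-compatible endpoints get a plain body.
    if "openrouter" in api_url.lower():
        payload["reasoning"] = {"enabled": reasoning}
    return payload

print("reasoning" in build_payload("m", [], "https://openrouter.ai/api/v1/chat/completions", True))  # -> True
print("reasoning" in build_payload("m", [], "https://api.example.com/v1/chat/completions", True))    # -> False
```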
241-263: Harden editor validation and avoid blind catch in open_in_notepad.

Add a minimal extension check on Windows and prefer OS-specific exceptions; keeps the S603/BLE001 linters happy and reduces foot-guns.

```diff
 @plugin.on_method
 def open_in_notepad(text: str, text_editor: str) -> None:
 @@
     try:
         text_editor = os.path.normpath(text_editor)
         # Validate that the editor executable exists
         if not os.path.isfile(text_editor):
             print(f"Error: Text editor not found: {text_editor}")
             return
+        # Windows: ensure we're launching an executable
+        if sys.platform == "win32" and not text_editor.lower().endswith(".exe"):
+            print(f"Error: Text editor must be a .exe on Windows: {text_editor}")
+            return
         # Create a temporary file with the text content
         fd, path = tempfile.mkstemp(suffix=".txt", prefix="ai_response_")
         with os.fdopen(fd, 'w', encoding='utf-8') as f:
             f.write(text)
         # Open the file with notepad using subprocess
         subprocess.Popen([text_editor, path])
-    except Exception as e:
-        print(f"Error opening notepad: {e}")
+    except OSError as e:
+        print(f"Error opening editor: {e}")
+    except Exception as e:
+        print(f"Unexpected error opening editor: {e}")
```

Optional: whitelist known editors by basename (e.g., notepad.exe, code.exe, notepad++.exe) for stricter control.
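The validation checks can also be factored into a pure function so they are testable without launching anything; the platform and file-existence check are passed in explicitly here so the sketch runs anywhere (names are illustrative):

```python
import os

def editor_is_valid(path: str, platform: str, isfile=os.path.isfile) -> bool:
    # Reject missing executables, and on Windows anything that is not a .exe.
    path = os.path.normpath(path)
    if not isfile(path):
        return False
    if platform == "win32" and not path.lower().endswith(".exe"):
        return False
    return True

# Fake isfile so the example runs without a real notepad.exe on disk
fake_isfile = lambda p: p.endswith(("notepad.exe", "evil.bat"))
print(editor_is_valid("C:\\Windows\\notepad.exe", "win32", fake_isfile))  # -> True
print(editor_is_valid("C:\\tmp\\evil.bat", "win32", fake_isfile))         # -> False
```

Injecting `isfile` keeps the filesystem dependency out of the logic, which is what makes the whitelist idea straightforward to add later.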
🧹 Nitpick comments (8)
main.py (3)
193-200: Narrow exception handling for HTTP and JSON errors.

Improve error clarity; avoid the blind catch.

```diff
-    except Exception as e:
-        return send_results([
-            Result(
-                Title="Error",
-                SubTitle=str(e),
-                IcoPath="Images/app.png"
-            )
-        ])
+    except requests.exceptions.RequestException as e:
+        return send_results([Result(Title="Network error", SubTitle=str(e), IcoPath="Images/app.png")])
+    except (ValueError, KeyError) as e:
+        return send_results([Result(Title="Response parsing error", SubTitle=str(e), IcoPath="Images/app.png")])
+    except Exception as e:
+        return send_results([Result(Title="Unexpected error", SubTitle=str(e), IcoPath="Images/app.png")])
```
169-175: Standardize ContextData shape. Sometimes a list, sometimes a string. Prefer a consistent type (e.g., a list [answer, editor]) to simplify consumers.
Also applies to: 269-275
133-135: Provider-specific headers. "HTTP-Referer"/"X-Title" are OpenRouter niceties. Consider sending them only when using OpenRouter to keep requests minimal elsewhere.
readme.md (5)
8-8: Generalize editor wording.

The editor is now configurable. Suggest: "Results can be copied to clipboard or opened in your text editor (default: Notepad)."

```diff
-- Results can be copied to clipboard or opened in Notepad
+- Results can be copied to clipboard or opened in your text editor (default: Notepad)
```
36-38: Provider-agnostic API key wording.

Avoid implying it must be an "OpenRouter" key.

```diff
-For security, your OpenRouter API key should be set as an environment variable:
-1. Create an environment variable named `FLOWLLM_API_KEY` with your API key from [OpenRouter](https://openrouter.ai/keys) or another OpenAI-compatible provider.
+For security, set your provider API key as an environment variable:
+1. Create an environment variable named `FLOWLLM_API_KEY` with your API key (e.g., from [OpenRouter](https://openrouter.ai/keys) or any OpenAI-compatible provider).
```
55-58: Reflect editor configurability in usage.

These bullets still say "Notepad".

```diff
-  - **Open in Notepad**: Click to open the response in Notepad for viewing or editing
+  - **Open in Editor**: Click to open the response in your chosen editor for viewing or editing
```
61-65: Optional: neutralize image caption.

If you keep the images, make the caption editor-agnostic.

```diff
-3. If you choose to open the response in Notepad:
+3. If you choose to open the response in your editor:
- 
+ 
```
71-73: Troubleshooting entry is good; consider adding a note on provider compatibility for "reasoning". E.g., "Reasoning mode is provider-specific; if requests fail on non-OpenRouter providers, disable it in Settings."
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- main.py (7 hunks)
- readme.md (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
main.py (3)
- lib/pyflowlauncher/plugin.py (4): `Plugin` (20-102), `settings` (65-69), `on_method` (37-42), `run` (71-82)
- lib/pyflowlauncher/result.py (4): `Result` (40-65), `send_results` (73-75), `ResultResponse` (68-70), `JsonRPCAction` (18-22)
- lib/pyflowlauncher/settings.py (1): `settings` (5-7)
🪛 Ruff (0.14.0)
main.py
261-261: subprocess call: check for execution of untrusted input (S603)
262-262: Do not catch blind exception: Exception (BLE001)
🔇 Additional comments (1)
main.py (1)
37-37: Env var rename verified consistently applied repo-wide. The search confirmed FLOWLLM_API_KEY is used consistently across main.py and readme.md with no lingering OPENROUTER_API_KEY references. Documentation and code are aligned.
This feature adds support for connecting to any OpenAI-compatible API providers (not just OpenRouter) and allows configuring the system prompt for queries.
- `api_url` field in settings to specify a custom API provider URL (defaults to OpenRouter).
- `system_prompt` field for configuring the system prompt (with a default value for concise responses).
- `reasoning` setting to enable/disable reasoning mode in API requests.
- `full_response` setting to control whether to display the full response or a preview in the subtitle.
- `text_editor` setting to allow customization of the text editor for opening responses.
- Updated `main.py` to use the custom URL, add the system prompt to messages, and handle the new settings.
- Renamed `OPENROUTER_API_KEY` to `FLOWLLM_API_KEY` for universality.

Summary by CodeRabbit
New Features
Documentation