
Added customizable LLM API providers, system prompt, reasoning mode, display full response, and customization of the text editor#4

Open
xenongee wants to merge 13 commits into ShamanicArts:main from xenongee:feature/custom-llm-api

Conversation

@xenongee

@xenongee xenongee commented Oct 14, 2025

This feature adds support for connecting to any OpenAI-compatible API provider (not just OpenRouter) and allows configuring the system prompt for queries.

  • Added api_url field in settings to specify custom API provider URL (defaults to OpenRouter).
  • Added system_prompt field for configuring the system prompt (with default value for concise responses).
  • Added reasoning setting to enable/disable reasoning mode in API requests.
  • Added full_response setting to control whether to display full response or preview in subtitle.
  • Added text_editor setting to allow customization of the text editor for opening responses.
  • Updated logic in main.py to use custom URL and add system prompt to messages, and handle new settings.
  • Renamed environment variable from OPENROUTER_API_KEY to FLOWLLM_API_KEY for universality.
  • Improved settings handling with better fallback logic and removed redundant code.
  • Updated UI elements: changed icons, improved result titles with model names, enhanced context menu.
  • Updated metadata in plugin.json (authors, version) and readme.md (description, instructions, features list).
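The query-flow changes above can be sketched as follows. This is a minimal illustration, not the plugin's actual code: the endpoint URL, default model, `FLOWLLM_API_KEY`, and the `reasoning` payload shape come from this PR, while the function names (`build_messages`, `ask`) and the settings-dict shape are assumptions for the example.

```python
import os

import requests

DEFAULT_LLM_PROVIDER_URL = "https://openrouter.ai/api/v1/chat/completions"
DEFAULT_MODEL = "mistralai/mistral-small-3.2-24b-instruct:free"


def build_messages(system_prompt: str, query: str) -> list:
    """Prepend the optional system prompt to the user message."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": query})
    return messages


def ask(query: str, settings: dict) -> str:
    """Send a chat completion request to the configured provider."""
    api_url = settings.get("api_url") or DEFAULT_LLM_PROVIDER_URL
    api_key = os.environ.get("FLOWLLM_API_KEY", "")
    payload = {
        "model": settings.get("default_model", DEFAULT_MODEL),
        "messages": build_messages(settings.get("system_prompt", ""), query),
        "reasoning": {"enabled": bool(settings.get("reasoning", True))},
    }
    resp = requests.post(
        api_url,
        headers={"Authorization": f"Bearer {api_key}"},
        json=payload,
        timeout=30,  # avoid hanging indefinitely on an unresponsive provider
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

When `system_prompt` is empty, the request carries only the user message, which matches the optional-system-prompt handling described in the review below.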

Summary by CodeRabbit

  • New Features

    • Added support for any OpenAI-compatible API provider with configurable URL.
    • Introduced custom system prompt configuration for fine-tuned responses.
    • Added reasoning mode toggle to enable/disable reasoning functionality.
    • Added customizable text editor setting for note-taking.
    • Added option to display full AI response or abbreviated version.
  • Documentation

    • Updated plugin metadata, description, and version.
    • Enhanced configuration guide reflecting new customization options.

@coderabbitai

coderabbitai Bot commented Oct 14, 2025

Walkthrough

The plugin is extended to support multiple OpenAI-compatible API providers beyond OpenRouter. Configuration adds customizable API URL, system prompt, reasoning mode toggle, and text editor selection. Core logic updated to use configurable provider URL, include system prompts in model messages, and pass reasoning flags in API payloads.

Changes

Cohort / File(s) Summary
Settings Expansion
SettingsTemplate.yaml
Added api_url, system_prompt, reasoning, full_response, and text_editor configuration fields; updated api_key label to reference "LLM API Provider"; changed default model from deepseek to mistralai.
Core Logic
main.py
Introduced DEFAULT_LLM_PROVIDER_URL, DEFAULT_SYSTEM_PROMPT, DEFAULT_TEXT_EDITOR constants; replaced OPENROUTER_API_KEY with FLOWLLM_API_KEY; refactored query flow to construct system-prompted messages, include reasoning flag in payload, use configurable API provider URL; updated open_in_notepad() signature to accept text_editor parameter; enhanced response handling to support full response display and notepad editor selection.
Metadata & Documentation
plugin.json, readme.md
Updated plugin version to 0.0.6, description to reference "OpenAI-compatible providers," and added co-author "xenongee"; updated README to reflect fork branding as "AI-Assistant-Enhanced," deprecate old installation methods, rename environment variable to FLOWLLM_API_KEY, and document new customization features.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant FlowLauncher
    participant SettingsTemplate as Settings
    participant main.py as Plugin Logic
    participant APIProvider as API Provider
    participant TextEditor

    User->>FlowLauncher: Query AI
    FlowLauncher->>main.py: Process Query
    main.py->>SettingsTemplate: Load Configuration
    SettingsTemplate-->>main.py: api_url, api_key, system_prompt,<br/>reasoning, full_response, text_editor
    
    rect rgb(200, 220, 240)
    Note over main.py: Construct Model Messages
    main.py->>main.py: Build message with system_prompt
    end
    
    main.py->>APIProvider: POST with reasoning flag
    Note over APIProvider: configurable_url/chat/completions
    APIProvider-->>main.py: Response (answer)
    
    rect rgb(220, 240, 200)
    Note over main.py: Process Response
    main.py->>main.py: Normalize whitespace,<br/>derive subtitle from full_response
    end
    
    alt Show Full Response
        main.py-->>FlowLauncher: Result with full answer
    else Abbreviated
        main.py-->>FlowLauncher: Result with trimmed answer
    end
    
    User->>FlowLauncher: Select "Open in notepad" Action
    FlowLauncher->>main.py: open_in_notepad(text, text_editor)
    main.py->>TextEditor: Launch with configured editor

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Multiple interdependent changes across settings, API logic, and function signatures. Main complexity stems from API flow modifications (message construction with system prompts, reasoning flag integration, configurable provider URLs), response handling refactoring, and signature updates to open_in_notepad(). While changes follow a coherent theme (provider flexibility), they span distinct logical areas requiring separate verification of correctness and integration.

Poem

🐰 A rabbit hops through clouds of code,

With URLs now versatile and broad!

System prompts whisper, reasoning glows,

From one provider to wherever flows,

Custom editors? Oh my, how nice—

This plugin's enhanced, through and through! ✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title Check ✅ Passed The PR title "Added customizable LLM API providers, system prompt, reasoning mode, display full response, and customization of the text editor" directly aligns with the main changes in the changeset. It accurately references all five primary feature additions: the api_url setting for custom providers, system_prompt setting, reasoning setting, full_response setting, and text_editor setting. The title is specific and descriptive, avoiding vague terminology, and each item mentioned in the title is present and substantive in the actual changes. The compound nature of the title is appropriate given that these features are interconnected parts of the core enhancement to make the plugin more flexible and user-configurable.
Docstring Coverage ✅ Passed No functions found in the changes. Docstring coverage check skipped.
📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 42e6968 and 80d2f78.

📒 Files selected for processing (1)
  • readme.md (3 hunks)
🧰 Additional context used
🪛 LanguageTool
readme.md

[grammar] ~6-~6: There might be a mistake here.
Context: ...irectly from your launcher. ## Features - Query AI models directly from Flow Launc...

(QB_NEW_EN)


[grammar] ~16-~16: There might be a mistake here.
Context: .../disable reasoning mode ## Installation ~~### Method 1: Via Flow Launcher (Depre...

(QB_NEW_EN)


[grammar] ~17-~17: There might be a mistake here.
Context: ...ow Launcher (Deprecated for this fork)~~ 1. Open Flow Launcher 2. ~~Type the fol...

(QB_NEW_EN)


[grammar] ~22-~22: There might be a mistake here.
Context: ...lugin Store (Deprecated for this fork)~~ 1. Open Flow Launcher Settings 2. ~~Nav...

(QB_NEW_EN)


[grammar] ~23-~23: There might be a mistake here.
Context: ...rk)~~ 1. Open Flow Launcher Settings 2. Navigate to the "Plugin Store" tab 3...

(QB_NEW_EN)


[grammar] ~24-~24: There might be a mistake here.
Context: .... Navigate to the "Plugin Store" tab 3. Search for "AI Assistant" 4. ~~Click...

(QB_NEW_EN)


[grammar] ~25-~25: There might be a mistake here.
Context: ..." tab~~ 3. Search for "AI Assistant" 4. Click "Install" Note: These met...

(QB_NEW_EN)


[grammar] ~30-~30: There might be a mistake here.
Context: ... way. ### Method 3: Manual Installation 1. Download the latest release from the [Re...

(QB_NEW_EN)


[grammar] ~32-~32: There might be a mistake here.
Context: ...ed/releases) 2. Extract the zip file to %APPDATA%\FlowLauncher\Plugins 3. Restart Flow Launcher ## Configuration ...

(QB_NEW_EN)

🔇 Additional comments (1)
readme.md (1)

56-58: Documentation hardcodes "Notepad" but text editor is now customizable.

Line 58 states "Click to open the response in Notepad" but the PR adds a text_editor setting to allow selecting different editors. This is misleading to users who have configured an alternative editor.

Revise to reflect the configurable editor:

-   - **Open in Notepad**: Click to open the response in Notepad for viewing or editing
+   - **Open in Editor**: Click to open the response in your configured text editor for viewing or editing



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
readme.md (3)

34-36: Update environment variable name.

The documentation still references OPENROUTER_API_KEY, but the code now uses FLOWLLM_API_KEY (see main.py line 37). This inconsistency will confuse users.

Apply this diff to update the environment variable name:

-1. Create an environment variable named `OPENROUTER_API_KEY` with your API key from [OpenRouter](https://openrouter.ai/keys)
+1. Create an environment variable named `FLOWLLM_API_KEY` with your API key from your chosen provider (e.g., [OpenRouter](https://openrouter.ai/keys))
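The renamed variable is read the same way regardless of provider. A standalone sketch of the lookup, assuming a fall-back to a plugin setting (the `api_key` settings key here is an assumption for illustration; only `FLOWLLM_API_KEY` is from the PR):

```python
import os


def get_api_key(settings: dict) -> str:
    """Prefer the FLOWLLM_API_KEY environment variable; fall back to settings."""
    return os.environ.get("FLOWLLM_API_KEY") or settings.get("api_key", "")
```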

40-40: Update default model reference.

The documentation shows the old default model (deepseek/deepseek-chat:free), but it was changed to mistralai/mistral-small-3.2-24b-instruct:free in SettingsTemplate.yaml and main.py.

Apply this diff to update the default model:

-- `default_model`: The AI model to use (default: "deepseek/deepseek-chat:free")
+- `default_model`: The AI model to use (default: "mistralai/mistral-small-3.2-24b-instruct:free")

70-70: Update troubleshooting section.

The troubleshooting section still references OPENROUTER_API_KEY, which should be updated to FLOWLLM_API_KEY to match the code changes.

Apply this diff:

-- **API Key not set**: Make sure the `OPENROUTER_API_KEY` environment variable is set correctly
+- **API Key not set**: Make sure the `FLOWLLM_API_KEY` environment variable is set correctly
main.py (1)

107-107: Update error message to match new environment variable name.

The error message still references OPENROUTER_API_KEY, but the code now reads from FLOWLLM_API_KEY (line 37). This inconsistency will confuse users during troubleshooting.

Apply this diff to update the error message:

-                    SubTitle="Set OPENROUTER_API_KEY environment variable",
+                    SubTitle="Set FLOWLLM_API_KEY environment variable or configure API key in settings",
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5176c31 and 1c33ea8.

⛔ Files ignored due to path filters (1)
  • images/app.png is excluded by !**/*.png
📒 Files selected for processing (4)
  • SettingsTemplate.yaml (1 hunks)
  • main.py (9 hunks)
  • plugin.json (1 hunks)
  • readme.md (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
main.py (2)
lib/pyflowlauncher/settings.py (1)
  • settings (5-7)
lib/pyflowlauncher/plugin.py (2)
  • settings (65-69)
  • run (71-82)
🪛 LanguageTool
readme.md

[grammar] ~4-~4: There might be a mistake here.
Context: ...irectly from your launcher. ## Features - Query AI models directly from Flow Launc...

(QB_NEW_EN)

🪛 Ruff (0.14.0)
main.py

121-121: Probable use of requests call without timeout

(S113)

🔇 Additional comments (4)
plugin.json (1)

6-7: LGTM!

The metadata updates appropriately reflect the collaborative contribution and version progression.

SettingsTemplate.yaml (1)

3-6: LGTM!

The configuration additions properly support the new provider-agnostic functionality, with appropriate defaults and clear field labels.

Also applies to: 8-8, 13-14, 21-21, 23-28

readme.md (1)

1-3: LGTM!

The documentation updates properly reflect the provider-agnostic functionality and new features.

Also applies to: 9-11, 28-28

main.py (1)

19-22: LGTM!

The implementation properly integrates configurable provider URL and system prompt functionality. The message construction logic correctly handles the optional system prompt, and the settings retrieval includes appropriate fallbacks.

Also applies to: 37-37, 82-88, 112-117, 122-122, 126-127


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
readme.md (1)

72-72: Update the environment variable name in troubleshooting.

The troubleshooting section still references the old OPENROUTER_API_KEY environment variable, which is inconsistent with the updated variable name FLOWLLM_API_KEY mentioned in line 37.

Apply this diff to maintain consistency:

-- **API Key not set**: Make sure the `OPENROUTER_API_KEY` environment variable is set correctly
+- **API Key not set**: Make sure the `FLOWLLM_API_KEY` environment variable is set correctly
♻️ Duplicate comments (2)
SettingsTemplate.yaml (1)

7-7: Fix the field description.

The description "Select the AI model to use for queries" is incorrect for an API URL field. It appears to be copied from the default_model field description.

Apply this diff to correct the description:

-          description: "Select the AI model to use for queries"
+          description: "The API endpoint URL for your LLM provider (supports OpenAI-compatible APIs)"
main.py (1)

128-143: Add timeout to prevent indefinite hangs.

The HTTP request lacks a timeout parameter, which can cause the plugin to hang indefinitely if the API provider is unresponsive.

Apply this diff to add a timeout:

             response = requests.post(
                 api_provider_url,
                 headers={
                     "Authorization": f"Bearer {api_key}",
                     "Content-Type": "application/json",
                     "HTTP-Referer": "https://github.com/xenongee/Flow.Launcher.Plugin.AI-Assistant",
                     "X-Title": "AI Assistant - Flow Launcher Plugin"
                 },
                 json={
                     "model": default_model,
                     "messages": model_messages,
                     "reasoning": {
                         "enabled": reasoning
                     }
-                }
+                },
+                timeout=30
             )
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1c33ea8 and 705dcd5.

⛔ Files ignored due to path filters (2)
  • images/copy.png is excluded by !**/*.png
  • images/note.png is excluded by !**/*.png
📒 Files selected for processing (3)
  • SettingsTemplate.yaml (1 hunks)
  • main.py (7 hunks)
  • readme.md (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
main.py (3)
lib/pyflowlauncher/plugin.py (4)
  • Plugin (20-102)
  • settings (65-69)
  • on_method (37-42)
  • run (71-82)
lib/pyflowlauncher/result.py (4)
  • Result (40-65)
  • send_results (73-75)
  • ResultResponse (68-70)
  • JsonRPCAction (18-22)
lib/pyflowlauncher/settings.py (1)
  • settings (5-7)
🪛 LanguageTool
readme.md

[grammar] ~1-~1: There might be a mistake here.
Context: # Flow.Launcher.Plugin.AI-Assistant The Flow Launcher plugin connects to the...

(QB_NEW_EN)


[grammar] ~4-~4: There might be a mistake here.
Context: ...irectly from your launcher. ## Features - Query AI models directly from Flow Launc...

(QB_NEW_EN)


[grammar] ~31-~31: There might be a mistake here.
Context: ...nt/releases) 2. Extract the zip file to %APPDATA%\FlowLauncher\Plugins 3. Restart Flow Launcher ## Configuration ...

(QB_NEW_EN)

🪛 Ruff (0.14.0)
main.py

128-128: Probable use of requests call without timeout

(S113)


250-250: subprocess call: check for execution of untrusted input

(S603)


251-251: Do not catch blind exception: Exception

(BLE001)

@xenongee xenongee changed the title Added support for customizable LLM API providers and system prompt configuration Added customizable LLM API providers, system prompt, reasoning mode, display full response, and customization of the text editor Oct 16, 2025

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (2)
SettingsTemplate.yaml (1)

3-8: LGTM! Consider adding endpoint clarity.

The description has been corrected from the previous review. The current description accurately identifies this as an API provider URL rather than a model selection field.

Optionally, you could make the description slightly more specific by mentioning this is the full API endpoint path:

-          description: "URL of the OpenAI-compatible API provider"
+          description: "Full API endpoint URL for your OpenAI-compatible LLM provider"
main.py (1)

18-22: Consider refining the system prompt.

The constants are well-structured and align with the settings template. However, the DEFAULT_SYSTEM_PROMPT contains a specific instruction about terminal commands ("If a user asks a question that can be answered in the terminal, only provide commands") that may be too prescriptive for a general-purpose assistant.

Consider simplifying the system prompt or making the terminal command instruction optional:

-DEFAULT_SYSTEM_PROMPT = "You are an assistant providing concise, factually accurate responses with no formatting. Reply only with a plain-text answer: no markdown, lists, explanations, or extra text. Prioritize brevity and precise truth above all else. If a user asks a question that can be answered in the terminal, only provide commands."
+DEFAULT_SYSTEM_PROMPT = "You are an assistant providing concise, factually accurate responses with no formatting. Reply only with a plain-text answer: no markdown, lists, explanations, or extra text. Prioritize brevity and precise truth above all else."

Note: The raw string prefix r on line 22 is unnecessary for "notepad.exe" as it contains no backslashes, but it's harmless.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ebf704e and 53638d5.

📒 Files selected for processing (2)
  • SettingsTemplate.yaml (1 hunks)
  • main.py (7 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
main.py (3)
lib/pyflowlauncher/plugin.py (4)
  • Plugin (20-102)
  • settings (65-69)
  • on_method (37-42)
  • run (71-82)
lib/pyflowlauncher/result.py (4)
  • Result (40-65)
  • send_results (73-75)
  • ResultResponse (68-70)
  • JsonRPCAction (18-22)
lib/pyflowlauncher/settings.py (1)
  • settings (5-7)
🪛 Ruff (0.14.0)
main.py

256-256: subprocess call: check for execution of untrusted input

(S603)


257-257: Do not catch blind exception: Exception

(BLE001)

🔇 Additional comments (12)
SettingsTemplate.yaml (6)

21-21: LGTM!

The default model has been updated and is consistent with the DEFAULT_MODEL constant in main.py.


23-28: LGTM!

The system prompt field provides good customization options with a sensible default that emphasizes concise, factual responses. This aligns well with the plugin's purpose as a quick-answer tool.


30-35: LGTM!

The reasoning toggle provides users with control over whether to enable reasoning mode in their LLM queries.


42-42: LGTM!

The delimiter default is consistent with the DEFAULT_DELIMITER constant in main.py.


44-49: LGTM!

The full response toggle gives users control over whether to see the complete AI response or just a preview in the subtitle, providing good UX flexibility.


51-56: LGTM!

The text editor field allows users to customize their preferred editor with a sensible default. Validation of the editor path is handled in main.py.

main.py (6)

10-10: LGTM!

The re module import is necessary for the whitespace normalization on line 158.


88-95: LGTM!

The new settings are retrieved consistently with appropriate defaults that match the defined constants and settings template.


119-124: LGTM!

The message construction correctly builds the messages array with an optional system prompt followed by the user message, conforming to the OpenAI chat completion format.


153-180: LGTM!

The response handling improvements provide better UX:

  • Configurable full response vs preview display
  • Model name in the title for transparency
  • Dynamic text editor name in the UI
  • Proper context data passing for menu actions

223-223: LGTM!

More appropriate icon for the copy action.


262-290: LGTM!

The context menu properly handles the new data structure while maintaining backward compatibility with string data. The dynamic text editor display and updated icons improve the user experience.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
readme.md (1)

42-44: Update defaults and list all new settings.

Docs say default_model is “deepseek/deepseek-chat:free” but code default is “mistralai/mistral-small-3.2-24b-instruct:free”. Also, new settings (api_url, system_prompt, reasoning, full_response, text_editor) are missing.

 The plugin supports the following settings:
-- `default_model`: The AI model to use (default: "deepseek/deepseek-chat:free")
+- `default_model`: The AI model to use (default: "mistralai/mistral-small-3.2-24b-instruct:free")
 - `delimiter`: Symbol that indicates when to send a prompt (default: "||")
+- `api_url`: OpenAI‑compatible chat/completions endpoint (default: https://openrouter.ai/api/v1/chat/completions)
+- `system_prompt`: Default system prompt injected into requests (default: concise, plain‑text answers)
+- `reasoning`: Enable/disable reasoning mode (provider‑specific; supported on OpenRouter) (default: True)
+- `full_response`: Show full response in the subtitle vs a preview (default: True)
+- `text_editor`: Editor used to open responses (default: notepad.exe)
♻️ Duplicate comments (3)
main.py (3)

58-66: Fix falsy-handling bug in get_settings (breaks boolean prefs).

Explicit False/0/"" are being replaced by defaults. Preserve falsy values; only fall back when None/missing.

-        value = _settings_cache.get(key, default)
-
-        if not value:
-            value = default
+        value = _settings_cache.get(key)
+        if value is None:
+            value = default
         return value
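The failure mode can be reproduced in a few lines (a standalone sketch, not the plugin's actual `get_settings`): with `if not value`, a user who explicitly sets a boolean to `False` or a string to `""` gets the default back instead.

```python
# Simulated settings cache with explicit falsy user choices
_settings_cache = {"full_response": False, "delimiter": ""}


def get_setting_buggy(key, default):
    # `not value` treats False/0/"" as missing, silently overriding user choices
    value = _settings_cache.get(key, default)
    if not value:
        value = default
    return value


def get_setting_fixed(key, default):
    # Only fall back when the key is genuinely absent (or None)
    value = _settings_cache.get(key)
    if value is None:
        value = default
    return value
```

With this cache, `get_setting_buggy("full_response", True)` returns `True` even though the user chose `False`, while `get_setting_fixed("full_response", True)` correctly returns `False`.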

126-144: Gate the provider-specific “reasoning” field; otherwise non‑OpenRouter endpoints may error.

Add the field only for OpenRouter (or map per provider). Keeps requests OpenAI‑compatible.

-            response = requests.post(
-                api_provider_url,
-                headers={
+            # Build payload provider-safely
+            payload = {
+                "model": default_model,
+                "messages": model_messages,
+            }
+            if "openrouter" in api_provider_url.lower():
+                payload["reasoning"] = {"enabled": reasoning}
+
+            response = requests.post(
+                api_provider_url,
+                headers={
                     "Authorization": f"Bearer {api_key}",
                     "Content-Type": "application/json",
                     "HTTP-Referer": "https://github.com/xenongee/Flow.Launcher.Plugin.AI-Assistant",
                     "X-Title": "AI Assistant - Flow Launcher Plugin"
                 },
-                json={
-                    "model": default_model,
-                    "messages": model_messages,
-                    "reasoning": {
-                        "enabled": reasoning
-                    }
-                },
+                json=payload,
                 timeout=30
             )
OpenRouter reasoning param: confirm current JSON schema and whether OpenAI chat/completions accepts/ignores unknown top-level fields like "reasoning".
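The gating suggested above can be isolated as a small helper (a sketch under the same assumption as the diff: a substring check on the URL decides whether the provider-specific `reasoning` field is safe to send; the function name is illustrative):

```python
def build_payload(model: str, messages: list, api_url: str, reasoning: bool) -> dict:
    """Build an OpenAI-compatible payload; add `reasoning` only for OpenRouter."""
    payload = {"model": model, "messages": messages}
    if "openrouter" in api_url.lower():
        payload["reasoning"] = {"enabled": reasoning}
    return payload
```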

241-263: Harden editor validation and avoid blind catch in open_in_notepad.

Add minimal extension check on Windows and prefer OS-specific exceptions; keeps S603/BLE001 linters happy and reduces foot-guns.

 @plugin.on_method
 def open_in_notepad(text: str, text_editor: str) -> None:
@@
     try:
         text_editor = os.path.normpath(text_editor)
 
         # Validate that the editor executable exists
         if not os.path.isfile(text_editor):
             print(f"Error: Text editor not found: {text_editor}")
             return
+        # Windows: ensure we're launching an executable
+        if sys.platform == "win32" and not text_editor.lower().endswith(".exe"):
+            print(f"Error: Text editor must be a .exe on Windows: {text_editor}")
+            return
 
         # Create a temporary file with the text content
         fd, path = tempfile.mkstemp(suffix=".txt", prefix="ai_response_")
         with os.fdopen(fd, 'w', encoding='utf-8') as f:
             f.write(text)
 
         # Open the file with notepad using subprocess
         subprocess.Popen([text_editor, path])
-    except Exception as e:
-        print(f"Error opening notepad: {e}")
+    except OSError as e:
+        print(f"Error opening editor: {e}")
+    except Exception as e:
+        print(f"Unexpected error opening editor: {e}")

Optional: whitelist known editors by basename (e.g., notepad.exe, code.exe, notepad++.exe) for stricter control.
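That optional whitelist could look like this (a sketch only; the editor names are examples, and forward-slash paths are used so `os.path.basename` behaves the same on any platform):

```python
import os

# Example whitelist of editor executables, matched by basename
ALLOWED_EDITORS = {"notepad.exe", "notepad++.exe", "code.exe"}


def is_allowed_editor(text_editor: str) -> bool:
    """Accept only executables whose basename is on the whitelist."""
    return os.path.basename(text_editor).lower() in ALLOWED_EDITORS
```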

🧹 Nitpick comments (8)
main.py (3)

193-200: Narrow exception handling for HTTP and JSON errors.

Improve error clarity; avoid blind catch.

-        except Exception as e:
-            return send_results([
-                Result(
-                    Title="Error",
-                    SubTitle=str(e),
-                    IcoPath="Images/app.png"
-                )
-            ])
+        except requests.exceptions.RequestException as e:
+            return send_results([Result(Title="Network error", SubTitle=str(e), IcoPath="Images/app.png")])
+        except (ValueError, KeyError) as e:
+            return send_results([Result(Title="Response parsing error", SubTitle=str(e), IcoPath="Images/app.png")])
+        except Exception as e:
+            return send_results([Result(Title="Unexpected error", SubTitle=str(e), IcoPath="Images/app.png")])

169-175: Standardize ContextData shape.

Sometimes a list, sometimes a string. Prefer a consistent type (e.g., list [answer, editor]) to simplify consumers.

Also applies to: 269-275


133-135: Provider-specific headers.

“HTTP-Referer”/“X-Title” are OpenRouter niceties. Consider sending them only when using OpenRouter to keep requests minimal elsewhere.

readme.md (5)

8-8: Generalize editor wording.

Now configurable. Suggest: “Results can be copied to clipboard or opened in your text editor (default: Notepad).”

-- Results can be copied to clipboard or opened in Notepad
+- Results can be copied to clipboard or opened in your text editor (default: Notepad)

36-38: Provider-agnostic API key wording.

Avoid implying it must be an “OpenRouter” key.

-For security, your OpenRouter API key should be set as an environment variable:
-1. Create an environment variable named `FLOWLLM_API_KEY` with your API key from [OpenRouter](https://openrouter.ai/keys) or another OpenAI-compatible provider.
+For security, set your provider API key as an environment variable:
+1. Create an environment variable named `FLOWLLM_API_KEY` with your API key (e.g., from [OpenRouter](https://openrouter.ai/keys) or any OpenAI‑compatible provider).

55-58: Reflect editor configurability in usage.

These bullets still say “Notepad”.

-   - **Open in Notepad**: Click to open the response in Notepad for viewing or editing
+   - **Open in Editor**: Click to open the response in your chosen editor for viewing or editing

61-65: Optional: neutralize image caption.

If you keep images, make the caption editor‑agnostic.

- 3. If you choose to open the response in Notepad:
+ 3. If you choose to open the response in your editor:
-    ![Result opened in Notepad](images/ai-devito-text-result.png)
+    ![Result opened in a text editor](images/ai-devito-text-result.png)

71-73: Troubleshooting entry is good; consider adding a note on provider compatibility for “reasoning.”

E.g., “Reasoning mode is provider-specific; if requests fail on non‑OpenRouter providers, disable it in Settings.”

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 53638d5 and 42e6968.

📒 Files selected for processing (2)
  • main.py (7 hunks)
  • readme.md (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
main.py (3)
lib/pyflowlauncher/plugin.py (4)
  • Plugin (20-102)
  • settings (65-69)
  • on_method (37-42)
  • run (71-82)
lib/pyflowlauncher/result.py (4)
  • Result (40-65)
  • send_results (73-75)
  • ResultResponse (68-70)
  • JsonRPCAction (18-22)
lib/pyflowlauncher/settings.py (1)
  • settings (5-7)
🪛 LanguageTool
readme.md

[grammar] ~1-~1: There might be a mistake here.
Context: # Flow.Launcher.Plugin.AI-Assistant The Flow Launcher plugin connects to the...

(QB_NEW_EN)


[grammar] ~4-~4: There might be a mistake here.
Context: ...irectly from your launcher. ## Features - Query AI models directly from Flow Launc...

(QB_NEW_EN)


[grammar] ~31-~31: There might be a mistake here.
Context: ...nt/releases) 2. Extract the zip file to %APPDATA%\FlowLauncher\Plugins 3. Restart Flow Launcher ## Configuration ...

(QB_NEW_EN)

🪛 Ruff (0.14.0)
main.py

261-261: subprocess call: check for execution of untrusted input

(S603)


262-262: Do not catch blind exception: Exception

(BLE001)

🔇 Additional comments (1)
main.py (1)

37-37: Env var rename verified consistently applied repo-wide.

The search confirmed FLOWLLM_API_KEY is used consistently across main.py and readme.md with no lingering OPENROUTER_API_KEY references. Documentation and code are aligned.
