Refactor LLM Integration to Use securellm_adapter#5

Open
Vicbi wants to merge 14 commits into main from refactor-securellm

Conversation


Vicbi commented Feb 2, 2026

Refactor LLM Integration to Use securellm_adapter

⚙️ Release Notes

  • I've adapted the codebase to use securellm_adapter.py instead of direct OpenAI calls. Here's a summary of the changes:

  Files modified

  1. llm_query/securellm_adapter.py

  Added new functions and classes for compatibility:
  - llm_chat() — supports multi-turn conversations with system messages
  - query_llm() — convenience function for simple queries with system message
  - SecureLLMClient — drop-in replacement class for the OpenAI client pattern
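For illustration, a minimal sketch of what this compatibility surface might look like. The three entry-point names come from the PR; the default model, signatures, and the stubbed transport below are assumptions, not the adapter's real internals:

```python
# Hypothetical sketch of the securellm_adapter compatibility surface.
# The real module routes requests through SecureLLM; here the transport
# is stubbed so only the shapes of the three entry points are shown.

def _send(messages, model):
    # Stub transport; the real adapter would call the SecureLLM gateway.
    return f"[{model}] {messages[-1]['content']}"

def llm_chat(messages, model="apim:gpt-4.1"):
    """Multi-turn chat: takes an OpenAI-style list of role/content dicts."""
    return _send(messages, model)

def query_llm(prompt, system_message="You are a helpful assistant.",
              model="apim:gpt-4.1"):
    """Convenience wrapper: a single prompt plus a system message."""
    return llm_chat(
        [{"role": "system", "content": system_message},
         {"role": "user", "content": prompt}],
        model=model,
    )

class SecureLLMClient:
    """Drop-in stand-in for the OpenAI client pattern."""
    def __init__(self, model="apim:gpt-4.1"):
        self.model = model

    def chat(self, messages):
        return llm_chat(messages, model=self.model)
```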

  2. llm_query/LLM_analysis.py

  - Removed openai import, added securellm_adapter imports
  - query_openai() now wraps query_llm() from the adapter
  - api_key parameter is now optional (kept for backward compatibility)
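One way to keep query_openai() backward compatible while delegating to the adapter; the exact signature is an assumption, and query_llm is stubbed here so the sketch is self-contained:

```python
# Hypothetical sketch: query_openai() delegating to the adapter's
# query_llm() while still accepting (and ignoring) api_key.

def query_llm(prompt, system_message=""):
    # Stand-in for securellm_adapter.query_llm; the real one calls SecureLLM.
    return f"echo: {prompt}"

def query_openai(prompt, api_key=None, system_message=""):
    """Legacy entry point. api_key is accepted for backward compatibility
    but no longer used; auth comes from VAULT_SECRET_KEY instead."""
    return query_llm(prompt, system_message=system_message)
```

Existing call sites that still pass an old key keep working unchanged.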

  3. llm_query/ent_surgical_llm_analysis.py

  - Same changes as LLM_analysis.py
  - Updated all functions to use SecureLLM

  4. batch_query/batch_query.py

  - Removed openai import, added securellm_adapter imports
  - parallel_process_llm_cases() now uses query_llm() directly
  - api_key parameters made optional
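Calling query_llm() directly from the batch path might look roughly like this; the thread-pool approach and helper shape are assumptions, with query_llm stubbed for the sketch:

```python
# Hypothetical sketch of parallel case processing over query_llm().
from concurrent.futures import ThreadPoolExecutor

def query_llm(prompt):
    # Stand-in for securellm_adapter.query_llm.
    return prompt.upper()

def parallel_process_llm_cases(cases, max_workers=4):
    """Run query_llm over each case description concurrently.
    ThreadPoolExecutor.map preserves input order in the results."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(query_llm, cases))
```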

  5. training/training_LLM.py

  - Updated imports to use securellm_adapter
  - ConversationalLLMAnalyzer class now uses llm_chat() instead of OpenAI client
  - All api_key parameters made optional
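The conversational pattern here, accumulating message history and passing it to llm_chat() on each turn, can be sketched as follows; the class internals are assumptions and llm_chat is stubbed:

```python
# Hypothetical sketch of a conversational analyzer built on llm_chat().

def llm_chat(messages):
    # Stand-in for securellm_adapter.llm_chat; echoes the last user turn.
    return f"reply to: {messages[-1]['content']}"

class ConversationalLLMAnalyzer:
    """Keeps the full message history so each llm_chat() call sees
    prior turns, enabling multi-turn analysis."""
    def __init__(self, system_message="You analyze surgical cases."):
        self.history = [{"role": "system", "content": system_message}]

    def ask(self, prompt):
        self.history.append({"role": "user", "content": prompt})
        answer = llm_chat(self.history)
        self.history.append({"role": "assistant", "content": answer})
        return answer
```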

  6. pyproject.toml

  - Removed openai dependency (no longer directly used)
  - Added python-dotenv dependency (used by securellm_adapter)

  Usage

  Instead of passing an api_key, set the VAULT_SECRET_KEY environment variable:
 export VAULT_SECRET_KEY="your-vault-key"

  All existing function calls remain backward compatible — the api_key parameter is still accepted but ignored.
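Since auth now comes from the environment rather than a parameter, callers may want to fail fast when the variable is missing. A minimal standard-library sketch (the helper name is hypothetical):

```python
# Minimal sketch: fail fast if VAULT_SECRET_KEY is not set.
import os

def get_vault_key():
    key = os.environ.get("VAULT_SECRET_KEY")
    if not key:
        raise RuntimeError(
            "VAULT_SECRET_KEY is not set; export it before running "
            "any securellm_adapter-backed code."
        )
    return key
```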
  
  

  • I've created a CLI entrypoint for the ENT-LLM project.

    cli.py - the CLI entrypoint for batch and interactive queries.

    All available models can be found in the securellm repo.

    Usage

    python cli.py --list-models

    python cli.py --model apim:claude-3.7

    python cli.py --model apim:llama-3.3-70b --input cases.csv --output results.csv

    python cli.py --model apim:gemini-2.5-pro-preview-05-06 --interactive

    After installing the package (pip install -e .):

    ent-llm --model apim:gpt-4.1 --input cases.csv

Options

  | Flag          | Short | Description                             |
  |---------------|-------|-----------------------------------------|
  | --model       | -m    | Select LLM model                        |
  | --input       | -i    | Input CSV file path                     |
  | --output      | -o    | Output CSV file path                    |
  | --delay       | -d    | Delay between API calls (default: 0.2s) |
  | --interactive | -I    | Interactive query mode                  |
  | --list-models | -l    | List available models                   |
  | --verbose     | -v    | Enable verbose logging                  |
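An argparse parser matching the options above might be sketched like this; defaults beyond those stated in the table are assumptions:

```python
# Hypothetical sketch of the cli.py argument parser for the options table.
import argparse

def build_parser():
    p = argparse.ArgumentParser(prog="ent-llm")
    p.add_argument("--model", "-m", help="Select LLM model")
    p.add_argument("--input", "-i", help="Input CSV file path")
    p.add_argument("--output", "-o", help="Output CSV file path")
    p.add_argument("--delay", "-d", type=float, default=0.2,
                   help="Delay between API calls in seconds")
    p.add_argument("--interactive", "-I", action="store_true",
                   help="Interactive query mode")
    p.add_argument("--list-models", "-l", action="store_true",
                   help="List available models")
    p.add_argument("--verbose", "-v", action="store_true",
                   help="Enable verbose logging")
    return p
```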

  
  

📝 Code of Conduct & Contributing Guidelines

By submitting this pull request, you agree to follow our Coding Guidelines.

Vicbi added 14 commits February 1, 2026 15:53
…ault_generation_config, added a local extract_response_content() function since it's not provided by the package, updated error messages to reference the correct package name
…IMEOUT); added retry logic: up to 3 retries for timeout/connection errors with exponential backoff; handle these errors: TimeoutError, requests.exceptions.Timeout, ReadTimeout, ConnectionError
…ed from the LLM or response received but decision could not be extracted (None); exponential backoff between retries
Introduces a full ablation analysis pipeline to quantify how demographic
variables influence LLM surgical recommendations. Supports per-variable
and grouped demographic exclusions, token-aware case filtering, robust
retry logic, incremental CSV flushing with resume, and stratified sampling.
Includes utilities to compare ablations against baseline decisions with
flip-rate, confidence-shift, and optional ground-truth accuracy analysis.
