Conversation


@Pouyanpi Pouyanpi commented Oct 2, 2025

Implements flexible LLM parameter transformation to support provider-specific naming conventions (e.g., max_tokens -> max_new_tokens for HuggingFace).

Key features:

  • Automatic provider inference from LangChain module names
  • Built-in mappings for HuggingFace and Google Vertex AI
  • Custom parameter mapping support via config
  • Parameter dropping capability (map a parameter to null to drop it)
  • Integration with the llm_call function (see the sketch below)
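
A minimal sketch of the transformation behavior described above (illustrative only; the names here are assumptions, and the actual implementation lives in nemoguardrails/llm/parameter_mapping.py):

```python
from typing import Any, Dict, Optional

# Assumed shape of a provider mapping: old name -> new name, or None to drop
# the parameter entirely (the "map to null" case above).
HUGGINGFACE_MAPPING: Dict[str, Optional[str]] = {
    "max_tokens": "max_new_tokens",
}


def transform_llm_params(
    params: Dict[str, Any],
    mapping: Dict[str, Optional[str]],
) -> Dict[str, Any]:
    """Rename or drop parameters according to the provider mapping."""
    result: Dict[str, Any] = {}
    for name, value in params.items():
        if name not in mapping:
            result[name] = value  # unmapped parameters pass through unchanged
            continue
        target = mapping[name]
        if target is None:
            continue  # mapped to null -> drop the parameter
        result[target] = value
    return result


# {"max_tokens": 256, "temperature": 0.2}
# -> {"max_new_tokens": 256, "temperature": 0.2}
```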

TODO:

  • See if we can reduce the parameter transformation overhead by applying it only when a parameter_mapping is given or the provider is one of the built-in defaults (nothing critical).

@Pouyanpi Pouyanpi added this to the v0.18.0 milestone Oct 2, 2025
@Pouyanpi Pouyanpi added the enhancement New feature or request label Oct 2, 2025
@Pouyanpi Pouyanpi self-assigned this Oct 2, 2025
@Pouyanpi Pouyanpi requested a review from Copilot October 2, 2025 13:36

@Copilot Copilot AI left a comment

Pull Request Overview

This PR implements a flexible parameter mapping system for LLM providers to handle parameter naming inconsistencies across different providers (e.g., transforming max_tokens to max_new_tokens for HuggingFace).

  • Provider-agnostic parameter transformation with automatic provider inference from LangChain module names (see the sketch after this list)
  • Built-in mappings for HuggingFace and Google Vertex AI with support for custom mappings
  • Integration with the llm_call function and LLMRails configuration system
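
A rough illustration of what provider inference from LangChain module names could look like (the function name and module hints below are assumptions, not the PR's actual code):

```python
from typing import Optional

# Assumed substrings used to recognize a provider from the LLM class's module
# path, e.g. "langchain_huggingface.llms" -> "huggingface".
_PROVIDER_HINTS = {
    "huggingface": "huggingface",
    "vertexai": "vertexai",
}


def infer_provider(llm: object) -> Optional[str]:
    """Guess the provider by inspecting the LangChain LLM class's module name."""
    module_name = type(llm).__module__ or ""
    for provider, hint in _PROVIDER_HINTS.items():
        if hint in module_name:
            return provider
    return None
```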

Reviewed Changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| nemoguardrails/llm/parameter_mapping.py | Core parameter mapping module with provider inference and transformation logic |
| nemoguardrails/rails/llm/config.py | Adds the parameter_mapping field to the Model configuration |
| nemoguardrails/rails/llm/llmrails.py | Registers parameter mappings during LLM initialization |
| nemoguardrails/actions/llm/utils.py | Integrates parameter transformation into the llm_call function |
| tests/test_parameter_mapping.py | Comprehensive tests for parameter mapping functionality |
| tests/test_llmrails.py | Tests for parameter mapping registration in LLMRails |
| tests/test_llm_call_parameter_mapping.py | Tests for parameter mapping integration in llm_call |
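
For context, a possible end-to-end usage sketch; the parameter_mapping field name comes from the config.py change above, but the exact YAML shape, engine name, and mapping values are assumptions rather than excerpts from this PR:

```python
from nemoguardrails import LLMRails, RailsConfig

# Assumed YAML shape for the new field; the authoritative schema is the
# parameter_mapping field added to Model in nemoguardrails/rails/llm/config.py.
YAML_CONTENT = """
models:
  - type: main
    engine: huggingface_endpoint          # engine name used for illustration
    model: mistralai/Mistral-7B-Instruct-v0.2
    parameters:
      max_tokens: 256
    parameter_mapping:
      max_tokens: max_new_tokens          # rename for HuggingFace
      frequency_penalty: null             # drop a parameter the provider rejects
"""

config = RailsConfig.from_content(yaml_content=YAML_CONTENT)
rails = LLMRails(config)
```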


@Pouyanpi Pouyanpi force-pushed the feat/llm-params-translation branch from ee4e724 to 2caccf6 on October 2, 2025 13:41
Implements flexible LLM parameter transformation to support provider-specific naming conventions (e.g., max_tokens -> max_new_tokens for HuggingFace).
@Pouyanpi Pouyanpi force-pushed the feat/llm-params-translation branch from 2caccf6 to c6a40a7 on October 3, 2025 10:13