@adtyavrdhn adtyavrdhn commented Dec 6, 2025

Closes #3566


Add Customizable Prompt Configuration via PromptConfig

This PR introduces PromptConfig, a new configuration system for customizing all system-generated messages and tool metadata that PydanticAI sends to models. This provides a clean, extensible interface for overriding any text the framework injects into conversations.

PromptConfig Overview

PromptConfig serves as the central configuration class with two components:

  • PromptTemplates: Customizes system-generated messages (retry prompts, tool return confirmations, validation errors, etc.)
  • ToolConfig: Customizes tool metadata (currently supports tool_descriptions, with plans to add tool argument customization)

```python
from pydantic_ai import Agent, PromptConfig, PromptTemplates, ToolConfig

agent = Agent(
    'openai:gpt-4',
    prompt_config=PromptConfig(
        templates=PromptTemplates(
            validation_errors_retry='Please correct the validation errors and try again.',
            final_result_processed='Result received successfully.',
        ),
        tool_config=ToolConfig(
            tool_descriptions={
                'search_database': 'Search the customer database for user records.',
            }
        ),
    ),
)
```

PromptTemplates

Allows customization of system-injected messages. Each template can be either:

  • A static string for simple customization
  • A callable (part, RunContext) -> str for dynamic, context-aware messages

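To illustrate the callable form, here is a hypothetical sketch. `FakeRunContext` is a stand-in defined only so the snippet runs on its own; in the real API the second argument would be pydantic-ai's `RunContext`, which carries a `retry` count, and the exact message wording here is an assumption.

```python
from dataclasses import dataclass


# Hypothetical stand-in for pydantic_ai.RunContext, used only so this
# sketch is self-contained; the real run context exposes a `retry` count.
@dataclass
class FakeRunContext:
    retry: int


def validation_retry_message(part, ctx) -> str:
    # A dynamic template: a callable (part, ctx) -> str whose output
    # varies with the run's retry count.
    return f'Attempt {ctx.retry + 1}: please fix the validation errors above and try again.'


# The callable would be passed wherever a static string is accepted, e.g.:
# PromptTemplates(validation_errors_retry=validation_retry_message)
print(validation_retry_message(None, FakeRunContext(retry=0)))
```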
Available template fields:

| Field | Description |
|---|---|
| `final_result_processed` | Confirmation when a final result is successfully processed |
| `output_tool_not_executed` | Message when an output tool call is skipped because a result was already found |
| `function_tool_not_executed` | Message when a function tool call is skipped due to early termination |
| `tool_call_denied` | Message when a tool call is rejected by an approval handler |
| `validation_errors_retry` | Message appended to validation errors when asking the model to retry |
| `model_retry_string_tool` | Message when `ModelRetry` is raised from a tool |
| `model_retry_string_no_tool` | Message when `ModelRetry` is raised outside of a tool context |

ToolConfig

Allows overriding tool descriptions at runtime without modifying the original tool definitions. This is useful for providing different descriptions in different contexts or agent runs.

```python
agent = Agent(
    'openai:gpt-4',
    prompt_config=PromptConfig(
        tool_config=ToolConfig(
            tool_descriptions={'my_tool': 'Custom description for this agent context'}
        )
    ),
)
```

Future plans: Add support for customizing tool argument descriptions.

Integration Points

The prompt_config parameter is integrated throughout the agent system:

  • Agent constructor: Agent(..., prompt_config=...)
  • Run methods: agent.run(..., prompt_config=...) — allows per-run overrides
  • All execution modes: run(), run_sync(), run_stream(), run_stream_events(), iter()
  • Override context manager: agent.override(prompt_config=...)
  • All agent types: Standard Agent, WrappedAgent, DBOSAgent, PrefectAgent, TemporalAgent

Templates are applied right before messages are sent to the model.
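The precedence between the constructor-level and per-run `prompt_config` is not spelled out in the description above. A plausible reading, sketched below with hypothetical stand-ins (`PromptConfigStub`, `resolve` — neither is part of the PR), is that a config passed to a run method or `override()` replaces the agent-level one wholesale for that run:

```python
from dataclasses import dataclass
from typing import Optional


# Hypothetical stand-in for PromptConfig, reduced to a single field.
@dataclass
class PromptConfigStub:
    validation_errors_retry: str = 'Fix the errors and retry.'


AGENT_LEVEL = PromptConfigStub()  # set once in the Agent constructor


def resolve(run_level: Optional[PromptConfigStub]) -> PromptConfigStub:
    # Assumed rule: a config passed to run()/override() wins for that run;
    # otherwise the constructor-level config applies.
    return run_level if run_level is not None else AGENT_LEVEL


print(resolve(None).validation_errors_retry)
print(resolve(PromptConfigStub('Try once more.')).validation_errors_retry)
```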


Additional Change: return_kind field on ToolReturnPart

Added a new return_kind field to ToolReturnPart that provides contextual visibility into how a tool call was resolved:

| Value | Description |
|---|---|
| `'tool-executed'` | Tool ran successfully and produced a return value |
| `'final-result-processed'` | An output tool produced the run's final result |
| `'output-tool-not-executed'` | An output tool was skipped because a final result already existed |
| `'function-tool-not-executed'` | A function tool was skipped due to early termination after a final result |
| `'tool-denied'` | The tool call was rejected by an approval handler |

This field enables PromptTemplates to apply the appropriate template based on the tool return context, and provides useful debugging information when inspecting message history.
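A minimal sketch of how `return_kind` could drive template selection, using the values tabled above. `ToolReturnPartStub`, `DEFAULT_TEMPLATES`, and `render` are hypothetical stand-ins for the PR's internals, not its actual implementation:

```python
from dataclasses import dataclass

# Hypothetical default messages keyed by return_kind; overrides from
# PromptTemplates would take priority over these.
DEFAULT_TEMPLATES = {
    'final-result-processed': 'Final result processed.',
    'output-tool-not-executed': 'Output tool not executed: a final result was already found.',
    'function-tool-not-executed': 'Tool not executed: the run has already ended.',
    'tool-denied': 'The tool call was denied.',
}


# Hypothetical stand-in for ToolReturnPart with the new return_kind field.
@dataclass
class ToolReturnPartStub:
    tool_name: str
    content: object
    return_kind: str = 'tool-executed'


def render(part: ToolReturnPartStub, overrides: dict) -> object:
    # 'tool-executed' keeps the tool's real return value; every other kind
    # substitutes a (possibly user-customized) template string.
    if part.return_kind == 'tool-executed':
        return part.content
    return overrides.get(part.return_kind, DEFAULT_TEMPLATES[part.return_kind])


part = ToolReturnPartStub('search_database', None, return_kind='tool-denied')
print(render(part, {'tool-denied': 'Access to this tool was declined.'}))
```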


@adtyavrdhn adtyavrdhn changed the title Allow customization of tool validation error messages Allow customization of all hard coded strings sent to the model Dec 12, 2025
@adtyavrdhn adtyavrdhn changed the title Allow customization of all hard coded strings sent to the model [WIP] Allow customization of all hard coded strings sent to the model Dec 13, 2025
@adtyavrdhn adtyavrdhn marked this pull request as draft December 13, 2025 04:31
@adtyavrdhn

Moving to draft; this is still a WIP because I have to figure out how to keep it extensible for prompt-optimizer use, and how to make instructions, tool descriptions, etc. also overridable (if not in this PR, then at least provide the interface and architecture for it).


Development

Successfully merging this pull request may close these issues.

Allow customization of tool validation error messages

2 participants