Python: Bug: Python: Incompatibility between yaml response_format and AzureAIInference #12290


Open
mathrb opened this issue May 28, 2025 · 1 comment
mathrb commented May 28, 2025

Describe the bug
Declaring the response_format in a YAML file does not work with AzureAIInference.

To Reproduce
Steps to reproduce the behavior:
Create a YAML function:

name: DummyPrompt
description: hello
template_format: jinja2
execution_settings:
  azure_ai_inference:
    max_tokens: 100
    temperature: 0.0
    response_format: 
      type: json_schema
      json_schema:
        name: dummy_format
        schema:
          type: object
          properties:
            message: {type: string}
template: |
  <message role="system">
    Your goal is to return the message of the user as JSON.
  </message>
  <message role="user">
    {{ message }}
  </message>

Try to invoke the function with AzureAIInferenceChatCompletion.
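
For reference, a minimal repro sketch (the endpoint, key, and deployment name are placeholders, and the exact AzureAIInferenceChatCompletion constructor arguments are my assumptions):

import asyncio
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.azure_ai_inference import AzureAIInferenceChatCompletion
from semantic_kernel.functions import KernelFunctionFromPrompt

async def main() -> None:
    kernel = Kernel()
    # Assumption: endpoint/key/deployment are illustrative placeholders.
    kernel.add_service(
        AzureAIInferenceChatCompletion(
            ai_model_id="<deployment>",
            endpoint="<endpoint>",
            api_key="<key>",
        )
    )
    with open("dummy_prompt.yaml") as f:
        function = KernelFunctionFromPrompt.from_yaml(f.read())
    # Fails before the model is called: the response_format dict from the
    # YAML cannot be converted into a JsonSchemaFormat (see below).
    print(await kernel.invoke(function, message="hello"))

asyncio.run(main())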

Semantic Kernel, and more specifically AzureAIInferenceChatPromptExecutionSettings, expects response_format to be {"type": "json_schema", "json_schema": {...}} (link).
The problem is that this response_format dict is passed as-is to AzureAIInferenceChatCompletion.
As a result, the JsonSchemaFormat cannot be created: it receives the whole {"type": "json_schema", "json_schema": {...}} wrapper instead of just the contents of json_schema.
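
To illustrate, here is a minimal sketch of the unwrapping I would expect (the helper name _unwrap_response_format is hypothetical, and I'm assuming JsonSchemaFormat's keyword arguments name/schema/description/strict from azure-ai-inference):

from azure.ai.inference.models import JsonSchemaFormat

def _unwrap_response_format(response_format: dict) -> JsonSchemaFormat:
    # Hypothetical helper: the YAML yields the OpenAI-style wrapper
    # {"type": "json_schema", "json_schema": {...}}, while JsonSchemaFormat
    # expects the inner fields directly.
    if response_format.get("type") != "json_schema":
        raise ValueError("only json_schema response formats are handled here")
    inner = response_format["json_schema"]
    return JsonSchemaFormat(
        name=inner["name"],
        schema=inner["schema"],
        description=inner.get("description"),
        strict=inner.get("strict"),
    )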

Expected behavior
The json_schema is passed correctly when response_format is read from a dict.

Platform

  • Language: Python
  • Source: semantic-kernel==1.32.0 azure-ai-inference==1.0.0b9
  • AI model: N/A, the problem arises before the model is actually called

Additional context

In the code:
# Case 4: response_format is a dictionary (legacy), create JsonSchemaFormat from dict
Does this mean that declaring response_format in YAML isn't intended anymore, or am I missing something?

@mathrb mathrb added the bug Something isn't working label May 28, 2025
@markwallace-microsoft markwallace-microsoft added python Pull requests for the Python Semantic Kernel triage labels May 28, 2025
@github-actions github-actions bot changed the title Bug: Python: Incompatibility between yaml response_format and AzureAIInference Python: Bug: Python: Incompatibility between yaml response_format and AzureAIInference May 28, 2025
moonbox3 (Contributor) commented

It's a bug, @mathrb. I'll work on fixing it.

@moonbox3 moonbox3 self-assigned this May 29, 2025
@moonbox3 moonbox3 added ai connector Anything related to AI connectors and removed triage labels May 29, 2025