Conversation

@riedgar-ms (Contributor)

Description

The OpenAI Responses API supports structured output via JSON schema. This change:

  • Fixes support in OpenAIResponsesTarget when JSON output is requested without a schema
  • Allows the user to supply a schema via prompt_metadata (sketched below)
  • Adds a test for each scenario
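
For illustration, a minimal sketch of what supplying a schema via prompt_metadata could look like (the response_format and json_schema keys follow the code discussed below; the schema itself is invented):

import json

prompt_metadata = {
    "response_format": "json",
    "json_schema": json.dumps(
        {
            "type": "object",
            "properties": {"score": {"type": "integer"}},
            "required": ["score"],
        }
    ),
}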

Tests and Documentation

I have added a test for both scenarios (i.e. with and without a schema). However, this is not yet documented.


text_format = None
if is_json_response:
    if conversation[-1].message_pieces[0].prompt_metadata.get("json_schema"):
@riedgar-ms (Contributor Author) on Nov 7, 2025

I am not, to put it mildly, a fan of smuggling in the schema this way. Two reasons:

  • The schema could be loaded as a Python object; however, doing so would require a change to MessagePiece
  • Even if it is left as a string, it should be extracted by the is_response_format_json() method

However, both of these have a rather larger blast radius, so I wanted to consult @romanlutz and @rlundeen2 first.

"input": input_items,
# Correct JSON response format per Responses API
"response_format": {"type": "json_object"} if is_json_response else None,
"text": text_format,
@riedgar-ms (Contributor Author)

This is the 'bug fix' part; response_format comes from the Chat Completions API, not the Responses API. See:

https://platform.openai.com/docs/api-reference/responses/create#responses_create-text
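
For context, a hedged sketch of how the same JSON-mode request differs between the two APIs (model name and message content are illustrative, not from this PR):

# Chat Completions API: JSON mode is requested via "response_format".
chat_completions_body = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Reply in JSON."}],
    "response_format": {"type": "json_object"},
}

# Responses API: the equivalent option lives under "text".
responses_body = {
    "model": "gpt-4o",
    "input": [{"role": "user", "content": "Reply in JSON."}],
    "text": {"format": {"type": "json_object"}},
}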

@romanlutz (Contributor)

Do you expect a different schema per message sent, or just one schema per target? I think that's the main decision point. If it's the latter, we can pass it to the target. If it's the former, that makes it trickier, and I would probably suggest something like what you're doing here.

@riedgar-ms (Contributor Author)

The schema is per-response. You might want different JSON structures as a conversation develops, and you also probably don't want a JSON response every time. This came out of finding that the SelfAskScorer wasn't working with an OpenAIResponsesTarget; I've not tracked down that prompt yet (ISTR YAML files), but anything in those files which asks for a JSON response could have the required schema stored there too, and passed along as needed.

I was wondering if I could store the schema as a Python object (which would be serialisable to JSON), but I've not dug through things enough to know if prompt_metadata has to have str or int values.

@romanlutz (Contributor)

Re: the Responses target not working with the scorer

We don't currently enforce a JSON format; we just use the other option (I don't recall the name offhand, but switching to schema was definitely on our list of things to do). What exactly is failing currently?

Maybe this should be a new parameter on a message piece... curious what @rlundeen2 thinks.

@riedgar-ms (Contributor Author)

The specific bug is that the Responses API doesn't use response_format in the body, but the alternative I show. There are also two options: json_object, which doesn't enforce a schema (although it may enforce JSON output; I haven't dug enough through the code to be sure), and json_schema, where you can pass an actual JSON schema object.
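
A minimal sketch of the two options as they would appear in the Responses API "text" parameter (the schema contents are invented; the shape follows the API reference linked above):

# json_object: requests JSON output, but enforces no particular shape.
text_json_object = {"format": {"type": "json_object"}}

# json_schema: constrains output to a caller-supplied JSON schema.
text_json_schema = {
    "format": {
        "type": "json_schema",
        "name": "Score",
        "schema": {
            "type": "object",
            "properties": {"score": {"type": "integer"}},
            "required": ["score"],
        },
        "strict": True,
    }
}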

@romanlutz (Contributor)

> The specific bug is that the Responses API doesn't use response_format in the body, but the alternative I show. There are also two options: json_object, which doesn't enforce a schema (although it may enforce JSON output; I haven't dug enough through the code to be sure), and json_schema, where you can pass an actual JSON schema object.

Yes, we support json_object, although it's more of a suggestion than an enforced output format. We noticed json_schema a while ago, but the problem was always how to pass the schema. I'll ping @rlundeen2 and @bashirpartovi internally to see if anyone has thoughts.

@riedgar-ms (Contributor Author)

@romanlutz @rlundeen2, I've just pushed changes which modify is_response_format_json() so that, if a JSON schema is supplied, it returns that schema. Now, I'm not a huge fan of functions having different return types like this, but changing the return type to a tuple didn't feel much better.

I think I've caught all the places this could have affected...
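
To make the return-type concern concrete, the interim signature being described would look roughly like this (a sketch based on this thread, not the final code):

from typing import Union

class PromptChatTarget:  # simplified stand-in for the real base class
    def is_response_format_json(self, *, message_piece) -> Union[bool, dict]:
        # Returns False/True when no schema is involved, but returns the
        # parsed schema dict when one was supplied via prompt_metadata.
        ...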

@bashirpartovi (Contributor)

Thanks @riedgar-ms for helping fix this issue. However, I do not agree with this approach, because it violates the principle of least surprise and does not provide a clean design. I believe all of this can be encapsulated much more cleanly (the following is just to demonstrate an approach):

We could define a class to encapsulate the JSON response configuration, as follows:

from __future__ import annotations

import json
from dataclasses import dataclass
from typing import Any, Dict, Optional

@dataclass
class JsonResponseConfig:

    enabled: bool = False
    schema: Optional[Dict[str, Any]] = None
    schema_name: str = "CustomSchema"
    strict: bool = True
    
    @classmethod
    def from_metadata(cls, metadata: Optional[Dict[str, Any]]) -> "JsonResponseConfig":
        if not metadata:
            return cls(enabled=False)
            
        response_format = metadata.get("response_format")
        if response_format != "json":
            return cls(enabled=False)
            
        schema_val = metadata.get("json_schema")
        if schema_val:
            if isinstance(schema_val, str):
                try:
                    schema = json.loads(schema_val)
                except json.JSONDecodeError:
                    raise ValueError(f"Invalid JSON schema provided: {schema_val}")
            else:
                schema = schema_val
                
            return cls(
                enabled=True,
                schema=schema,
                schema_name=metadata.get("schema_name", "CustomSchema"),
                strict=metadata.get("strict", True)
            )
        
        return cls(enabled=True)
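
A quick usage sketch of the class above (values invented; this is not code from the PR):

config = JsonResponseConfig.from_metadata(
    {"response_format": "json", "json_schema": '{"type": "object"}'}
)
assert config.enabled
assert config.schema == {"type": "object"}

# Metadata that is missing or doesn't request JSON yields a disabled config.
assert not JsonResponseConfig.from_metadata(None).enabled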

We also need to add a new method to the prompt chat target to handle retrieving the JSON config:

class PromptChatTarget(PromptTarget):
    ...

    def is_response_format_json(self, *, message_piece: MessagePiece) -> bool:
        config = self.get_json_response_config(message_piece=message_piece)
        return config.enabled

    def get_json_response_config(self, *, message_piece: MessagePiece) -> JsonResponseConfig:
        config = JsonResponseConfig.from_metadata(message_piece.prompt_metadata)
        
        if config.enabled and not self.is_json_response_supported():
            target_name = self.get_identifier()["__type__"]
            raise ValueError(f"This target {target_name} does not support JSON response format.")
            
        return config

We also need to change the OpenAI base target so that the request-body signature takes the JSON config class:

class OpenAIChatTargetBase(OpenAITarget, PromptChatTarget):
    ...

    async def _construct_request_body(
        self,
        *,
        conversation: MutableSequence[Message],
        json_config: JsonResponseConfig
    ) -> dict:
        raise NotImplementedError

OpenAI chat target becomes:

class OpenAIChatTarget(OpenAIChatTargetBase):
    ...
    async def _construct_request_body(
        self,
        *,
        conversation: MutableSequence[Message],
        json_config: JsonResponseConfig
    ) -> dict:
        messages = await self._build_chat_messages_async(conversation)
        response_format = self._build_response_format(json_config)
        
        body_parameters = {
            "model": self._model_name,
            "max_completion_tokens": self._max_completion_tokens,
            "temperature": self._temperature,
            "top_p": self._top_p,
            "frequency_penalty": self._frequency_penalty,
            "presence_penalty": self._presence_penalty,
            "logit_bias": self._logit_bias,
            "stream": False,
            "seed": self._seed,
            "n": self._n,
            "messages": messages,
            "response_format": response_format,
        }

        if self._extra_body_parameters:
            body_parameters.update(self._extra_body_parameters)

        return {k: v for k, v in body_parameters.items() if v is not None}
    
    def _build_response_format(self, json_config: JsonResponseConfig) -> Optional[Dict[str, Any]]:
        if not json_config.enabled:
            return None
            
        if json_config.schema:
            return {
                "type": "json_schema",
                "json_schema": {
                    "name": json_config.schema_name,
                    "schema": json_config.schema,
                    "strict": json_config.strict
                }
            }
        
        return {"type": "json_object"}

The OpenAI Response target becomes:

class OpenAIResponseTarget(OpenAIChatTargetBase):
    ...

    async def _construct_request_body(
        self,
        *,
        conversation: MutableSequence[Message],
        json_config: JsonResponseConfig
    ) -> dict:
        input_items = await self._build_input_for_multi_modal_async(conversation)

        text_format = self._build_text_format(json_config)
        
        body_parameters = {
            "model": self._model_name,
            "max_output_tokens": self._max_output_tokens,
            "temperature": self._temperature,
            "top_p": self._top_p,
            "stream": False,
            "input": input_items,
            "text": text_format,
        }

        if self._extra_body_parameters:
            body_parameters.update(self._extra_body_parameters)

        return {k: v for k, v in body_parameters.items() if v is not None}
    
    def _build_text_format(self, json_config: JsonResponseConfig) -> Optional[Dict[str, Any]]:
        if not json_config.enabled:
            return None
            
        if json_config.schema:
            return {
                "format": {
                    "type": "json_schema",
                    "name": json_config.schema_name,
                    "schema": json_config.schema,
                    "strict": json_config.strict
                }
            }
        
        logger.info("Using json_object format without schema - consider providing a schema for better results")
        return {"format": {"type": "json_object"}}

OpenAI chat target base:

class OpenAIChatTargetBase(OpenAITarget, PromptChatTarget):
   ...
    
    async def send_prompt_async(self, *, message: Message) -> Message:
        self._validate_request(message=message)
        conversation = self._get_conversation(message=message)
        
        json_config = JsonResponseConfig(enabled=False)
        if message.message_pieces:
            last_piece = message.message_pieces[-1]
            json_config = self.get_json_response_config(message_piece=last_piece)
        
        request_body = await self._construct_request_body(
            conversation=conversation,
            json_config=json_config
        )
        ...
        # rest of the method

@riedgar-ms (Contributor Author)

@bashirpartovi happy to make changes along those lines

@rlundeen2 (Contributor)

Agreed with the approach Bashir suggested. I know it's a bigger change, but I think it will be a lot more maintainable.

@riedgar-ms riedgar-ms marked this pull request as draft November 15, 2025 16:01
@riedgar-ms riedgar-ms marked this pull request as ready for review November 16, 2025 20:33
@riedgar-ms (Contributor Author)

@bashirpartovi @rlundeen2 I've made those changes - very happy to introduce a new type for this; my previous changes were trying to be minimal :-)

I think that the code is now ready for re-review; if you're OK with it, I can see about adding some documentation.

