
Conversation


@brightsparc brightsparc commented Nov 12, 2025

This PR adds support for GrokModel, built on the xai-sdk-python library, with support for chat completions, tool calling, and built-in server-side tool calls.

I have included a series of tests, as well as an example stock analysis agent that uses search and code execution and returns its results in the form of a local tool call.

python examples/pydantic_ai_examples/stock_analysis_agent.py

🔍 Starting stock analysis...

Query: What was the hottest performing stock on NASDAQ yesterday?

18:32:37.095 stock_analysis_agent run
18:32:37.099   chat grok-4-fast
18:32:37.119     chat.stream grok-4-fast
🔧 Server-side tool: web_search
🔧 Server-side tool: browse_page

✅ Analysis complete!

📊 Top Stock: ATGL
💰 Current Price: $20.00

📈 Buy Analysis:
ATGL surged 139.84% on November 18, 2025, making it the hottest performer on NASDAQ, but pulled back 7.45% today to $20.00. The extreme volatility suggests high risk; without clear fundamental drivers, it's not recommended as a buy for most investors. Suitable only for speculative short-term trades.

📊 Usage Statistics:
   Requests: 1
   Input Tokens: 19150
   Output Tokens: 526
   Total Tokens: 19676
   Server-Side Tools: 7

Below is what the output looks like in Logfire. It includes custom spans emitted from the xai-sdk.

[screenshot: Logfire trace showing the xai-sdk spans]

Cost is coming back as Unknown, so it would be good to understand how we can populate this.

@brightsparc brightsparc marked this pull request as draft November 12, 2025 01:04
@DouweM (Collaborator) commented Nov 12, 2025

@brightsparc Thanks Julian! Let me know when this is ready for review or if you have any questions.

@brightsparc brightsparc marked this pull request as ready for review November 19, 2025 02:36
@brightsparc brightsparc changed the title Initial commit for xAI grok model xAI grok model with tests and stock analysis agent Nov 19, 2025
@brightsparc (Author) commented Nov 19, 2025

Because our own SDK uses OpenTelemetry, I did notice an error when running the Pydantic AI code in an asyncio loop:

Failed to detach context
Traceback (most recent call last):
  File "/Users/julian/workspace/poc/pydantic-ai/.venv/lib/python3.12/site-packages/opentelemetry/context/__init__.py", line 155, in detach
    _RUNTIME_CONTEXT.detach(token)
  File "/Users/julian/workspace/poc/pydantic-ai/.venv/lib/python3.12/site-packages/opentelemetry/context/contextvars_context.py", line 53, in detach
    self._current_context.reset(token)
ValueError: <Token var=<ContextVar name='current_context' default={} at 0x1055d33d0> at 0x109ca6e00> was created in a different Context

We could disable our tracing with XAI_SDK_DISABLE_TRACING=1, but it would be good to understand why this happens.
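For background, this ValueError is the standard contextvars failure mode: a Token can only be reset in the Context that created it, and each asyncio task runs in a copy of the parent Context. Below is a minimal standalone sketch (plain contextvars, not the actual SDK code) that reproduces the same error, assuming the SDK attaches a span in one task and detaches it in another:

```python
import asyncio
import contextvars

# Stand-in for OpenTelemetry's `current_context` ContextVar.
var = contextvars.ContextVar('current_context', default={})

async def detach_elsewhere(token: contextvars.Token) -> str:
    # This coroutine runs as a separate Task, i.e. in a *copy* of the creating
    # Context, so resetting a token minted in the parent Context fails.
    try:
        var.reset(token)
        return 'no error'
    except ValueError as exc:
        return str(exc)

async def main() -> str:
    token = var.set({'span': 'chat grok-4-fast'})
    return await asyncio.create_task(detach_elsewhere(token))

error = asyncio.run(main())
print(error)  # includes "was created in a different Context"
```

If the xai-sdk attaches otel context in one task (or thread) and detaches it in another, this is exactly the traceback you would see.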

@DouweM (Collaborator) commented Nov 19, 2025

Cost is coming back as Unknown so it would be good to understand how we can populate this.

@brightsparc Looks like grok-4-fast is not on https://github.com/pydantic/genai-prices/blob/main/prices/providers/x_ai.yml yet. Contribution welcome! (See contrib docs)


Failing tests:

│  tests/test_examples.py             │  test_docs_examples[docs/models/grok.md:34]  │  113            │  256         │  ValueError      │

I believe we just need to add the xAI env var to the list here:

env.set('MOONSHOTAI_API_KEY', 'testing')
env.set('DEEPSEEK_API_KEY', 'testing')
env.set('OVHCLOUD_API_KEY', 'testing')
env.set('PYDANTIC_AI_GATEWAY_API_KEY', 'testing')
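Concretely, that would mean appending an entry for the xAI key. The variable name XAI_API_KEY matches the repro command later in the thread; here a plain dict stands in for the `env` test fixture, as a sketch only:

```python
# Sketch only: a dict standing in for the `env` fixture from test_examples.py.
test_env = {
    'MOONSHOTAI_API_KEY': 'testing',
    'DEEPSEEK_API_KEY': 'testing',
    'OVHCLOUD_API_KEY': 'testing',
    'PYDANTIC_AI_GATEWAY_API_KEY': 'testing',
}
# The suggested addition for the xAI provider:
test_env['XAI_API_KEY'] = 'testing'
```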

│  tests/test_examples.py             │  test_docs_examples[docs/models/grok.md:56]  │  113            │              │                  │

This one should be fixed by running uv run pytest --update-examples tests/test_examples.py -k grok.md

│  tests/models/test_instrumented.py  │  test_instrumented_model_stream_break        │  484            │  512         │  AssertionError  │

Looks like we need the new logfire.exception.fingerprint there as well. Not 100% sure why we're suddenly seeing that...

Due to using otel in our own SDK, I did notice an error when running the pydantic ai code in async.io loop:

Do you have example code for me that triggers it?

@brightsparc (Author) commented:

Cost is coming back as Unknown so it would be good to understand how we can populate this.

@brightsparc Looks like grok-4-fast is not on https://github.com/pydantic/genai-prices/blob/main/prices/providers/x_ai.yml yet. Contribution welcome! (See contrib docs)

I will open a separate PR for that.

Failing tests:

│  tests/test_examples.py             │  test_docs_examples[docs/models/grok.md:34]  │  113            │  256         │  ValueError      │

I believe we just need to add the xAI env var to the list here:

env.set('MOONSHOTAI_API_KEY', 'testing')
env.set('DEEPSEEK_API_KEY', 'testing')
env.set('OVHCLOUD_API_KEY', 'testing')
env.set('PYDANTIC_AI_GATEWAY_API_KEY', 'testing')

│  tests/test_examples.py             │  test_docs_examples[docs/models/grok.md:56]  │  113            │              │                  │

Added this:

This one should be fixed by running uv run pytest --update-examples tests/test_examples.py -k grok.md

│  tests/models/test_instrumented.py  │  test_instrumented_model_stream_break        │  484            │  512         │  AssertionError  │

Not sure I follow this

Looks like we need the new logfire.exception.fingerprint there as well. Not 100% sure why we're suddenly seeing that...

Due to using otel in our own SDK, I did notice an error when running the pydantic ai code in async.io loop:

Do you have example code for me that triggers it?

You can run the following command to reproduce the error:

XAI_API_KEY="a-valid-key" uv run python examples/pydantic_ai_examples/stock_analysis_agent.py

@brightsparc brightsparc changed the title xAI grok model with tests and stock analysis agent XaiModel with tests and stock analysis agent Nov 20, 2025
github-actions bot commented Dec 9, 2025

This PR is stale, and will be closed in 3 days if no reply is received.

@DouweM (Collaborator) left a comment

@brightsparc Thanks for your patience here! Left some more feedback, and can you please look at the merge conflicts with main?


Let's remove this example

from ..profiles import ModelProfileSpec
from ..profiles.grok import GrokModelProfile
from ..providers import Provider, infer_provider
from ..providers.xai import XaiModelName

Let's move this inside try/except ImportError above as it also depends on the SDK
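The pattern being suggested — keeping all SDK-dependent imports behind the existing guard — might look roughly like this sketch (the import path and error message here are illustrative, not the module's actual wording):

```python
def load_xai_sdk():
    """Import the optional xai-sdk dependency, failing with a clear message if missing."""
    try:
        # The chat types and the XaiModelName-related imports would all live
        # behind this guard, since every one of them depends on the optional SDK.
        import xai_sdk  # assumed top-level package name
    except ImportError as import_error:
        raise ImportError(
            'Please install the `xai-sdk` package to use XaiModel'
        ) from import_error
    return xai_sdk
```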

See [xAI SDK documentation](https://docs.x.ai/docs) for more details on these parameters.
"""

xai_logprobs: bool

We have logprobs on the base ModelSettings so no need to include it here.


Also note that we're not currently giving the user a way to access the logprobs. In OpenAI models, we do this using ModelResponse.provider_details['logprobs'] or TextPart.provider_details['logprobs'].

If we list xAI under supported providers for ModelSettings.logprobs, we should make sure we do that.
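To make the access pattern concrete, here is a sketch using simplified stand-in classes (not the real pydantic-ai types) that shows where the logprobs would be surfaced to the user:

```python
from dataclasses import dataclass, field
from typing import Any

# Simplified stand-ins for pydantic-ai's TextPart / ModelResponse, purely to
# illustrate the `provider_details['logprobs']` access pattern described above.
@dataclass
class TextPart:
    content: str
    provider_details: dict[str, Any] = field(default_factory=dict)

@dataclass
class ModelResponse:
    parts: list[TextPart]

logprobs = [{'token': 'hi', 'logprob': -0.12}]
response = ModelResponse(parts=[TextPart('hi', provider_details={'logprobs': logprobs})])

# How a user would read the logprobs back from the response part:
part_logprobs = response.parts[0].provider_details['logprobs']
```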

Corresponds to the `web_search_call.action.sources` value of the `include` parameter in the Responses API.
"""

xai_include_x_search_outputs: bool

x_search isn't supported yet, so let's leave this out for now.


if tool_call.status == chat_types.chat_pb2.ToolCallStatus.TOOL_CALL_STATUS_COMPLETED:
# Tool completed - emit return part with result
tool_result_content = XaiModel.get_tool_result_content(response.content)

See above; it confuses me that response.content is both the text and the built-in tool return value.

if tool_call.status == chat_types.chat_pb2.ToolCallStatus.TOOL_CALL_STATUS_COMPLETED:
# Tool completed - emit return part with result
tool_result_content = XaiModel.get_tool_result_content(response.content)
return_part = BuiltinToolReturnPart(

As in non-streaming, we probably need a BuiltinToolCallPart as well

# xAI supports JSON object output
supports_json_object_output=True,
# Default to 'native' for structured output since xAI supports it well
default_structured_output_mode='native',

Let's default to 'tool' in line with other models (meaning we can drop this field, as that's already the default).



# https://docs.x.ai/docs/models
XaiModelName = str | AllModels

With other providers, we have this in the model file instead, so let's move it there for consistency, unless there's a particular reason we can't.

* OpenAI (some models, not o1)
* Groq
* Anthropic
* Grok

We can drop this one, right?
