Benchmarks for perf #1756


Open — ollz272 wants to merge 21 commits into main from ollz272:benchmarks-for-perf
Conversation

@ollz272 (Contributor) commented Jul 17, 2025

Change Summary

Just a quick follow-up to the feature, adding some benchmarks to test performance. I saw about a 3% improvement locally when building with these.
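As a rough illustration of the kind of measurement involved (a hedged sketch, not code from this PR — the real benchmarks go through pydantic-core's validators), here is a `timeit` loop over epoch-timestamp coercion, the operation the new temporal benchmarks exercise:

```python
import timeit
from datetime import datetime, timezone

# Hypothetical sketch, not this PR's code: time repeated coercion of a Unix
# timestamp in seconds, similar in spirit to the new test_datetime_seconds /
# test_datetime_milliseconds benchmarks reported below.
def coerce_seconds(ts: int) -> datetime:
    # Coerce a Unix timestamp (seconds) to a timezone-aware datetime.
    return datetime.fromtimestamp(ts, tz=timezone.utc)

elapsed = timeit.timeit(lambda: coerce_seconds(1_700_000_000), number=10_000)
print(f"10,000 coercions took {elapsed:.4f}s")
```

A dedicated benchmark harness (pytest-benchmark / CodSpeed, as used in the CI reports below) would handle warm-up and statistics; this sketch only shows what is being timed.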

Related issue number

Checklist

  • Unit tests for the changes exist
  • Documentation reflects the changes where applicable
  • Pydantic tests pass with this pydantic-core (except for expected changes)
  • My PR is ready to review, please add a comment including the phrase "please review" to assign reviewers

@ollz272 marked this pull request as ready for review July 17, 2025 15:09

codecov bot commented Jul 17, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅



codspeed-hq bot commented Jul 17, 2025

CodSpeed Performance Report

Merging #1756 will degrade performance by 28.55%

Comparing ollz272:benchmarks-for-perf (63da35b) with main (37ec6e7)

Summary

❌ 1 regression
✅ 156 untouched benchmarks
🆕 2 new benchmarks

⚠️ Please fix the performance issues or acknowledge them on CodSpeed.

Benchmarks breakdown

| Benchmark | BASE | HEAD | Change |
| --- | --- | --- | --- |
| `test_strict_union_error_core` | 32.4 µs | 45.4 µs | -28.55% |
| 🆕 `test_datetime_milliseconds` | N/A | 25.3 µs | N/A |
| 🆕 `test_datetime_seconds` | N/A | 25 µs | N/A |

@davidhewitt (Contributor) left a comment


Just needs the groups adjusting, otherwise lgtm, thanks!

```diff
@@ -319,7 +319,6 @@ def r():
     fs_model_serializer.to_python(m, exclude_unset=True)
 
 
-@pytest.mark.benchmark(group='model-list-json')
```

Looks like this line was unintentionally removed by the addition below; it needs reverting.

```diff
@@ -360,6 +359,33 @@ def r():
     v.to_python(d, mode='json')
 
 
+@pytest.mark.benchmark(group='model-list-json')
```

Probably the benchmark group on all these temporal tests should be adjusted, maybe `group='temporal'`?
