I just noticed that when using pyright in Python 3.13 mode, variance inference for the type variables on all of my frozen generic dataclasses returns invariant, whereas in all previous versions it worked as expected.
After some digging, I discovered that this seems to be due to the introduction of `__replace__`, as discussed in python/mypy#17623 and in this thread: https://discuss.python.org/t/make-replace-stop-interfering-with-variance-inference/96092/20.
This is really unfortunate: I'm writing a type-safe control flow graph library in pydantic/pydantic-ai#2982 that relies heavily on generics and variance behavior, and being unable to use dataclasses anywhere I need type-checking compatibility with Python 3.13 would be a real problem.
I suppose I could replace `infer_variance=True` with `covariant=True` etc. in the typevars, but variance inference has actually been very helpful while developing this: there's enough surface area that it's easy to miss a place where variance soundness was broken.
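For comparison, the explicit-variance fallback I mean would look something like this (hypothetical `Emitter` class; the variance is declared up front rather than tracked from the class's actual shape):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

# Declared rather than inferred: this works on 3.13, but now each typevar
# carries a manual commitment instead of letting the checker derive the
# soundest variance from how the class actually uses it.
T_co = TypeVar("T_co", covariant=True)


@dataclass(frozen=True)
class Emitter(Generic[T_co]):
    value: T_co
```

The runtime behavior is identical to the inferred version; only the static declaration changes.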
Is there any way to work around this new limitation on variance inference for frozen dataclasses today, or is one expected in the near future? (E.g., something like explicitly setting `__replace__ = None` on the class, though I tried that specifically and it didn't work.) If not, I suppose I'll drop the use of dataclasses throughout the code I'm adding, but I'd really rather not. Correct variance for the relevant classes is a hard requirement, and I very strongly prefer to rely on variance inference because it's so easy to make mistakes by hand. (Not to mention that in the future, when we can use the new syntax from PEP 695, we'll be able to drop all the explicit typevar creation.)