
build: update ty to v0.0.16 and align type-checking config #535

Open

rensortino wants to merge 10 commits into PrunaAI:main from rensortino:build/update-ty

Conversation

@rensortino

Description

Updates ty integration and type-checking configuration, and fixes a few type-related issues to keep ty check and existing APIs working cleanly across pruna.

Tooling / Build

  • Bump dev dependency ty to 0.0.16.
  • Narrow pre-commit ty hook to uvx ty check src/pruna (avoids checking non-package paths like tests/).
  • Add ty ignore rules for possibly-missing-attribute and unused-type-ignore-comment; rename possibly-unbound-import to possibly-missing-import.
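As a rough sketch, the rule changes above would land in a pyproject.toml section along these lines (the [tool.ty.rules] table name follows ty's configuration conventions; the exact surrounding config is assumed, not copied from the PR):

```toml
[tool.ty.rules]
possibly-missing-import = "ignore"
possibly-missing-attribute = "ignore"
unused-type-ignore-comment = "ignore"
```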

Fixes / Compatibility

  • Align get_model_dependent_hyperparameter_defaults(...) signatures with the parent method by accepting SmashConfig | SmashConfigPrefixWrapper across multiple algorithms.
  • Fix deprecated detection in SmashConfig by using SMASH_SPACE.values() rather than get_hyperparameters().
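The widened signature can be illustrated roughly as follows. The class names come from pruna, but the stubbed bodies and the weight_bits default are hypothetical stand-ins for this sketch:

```python
from typing import Any, Union


# Simplified stand-ins for illustration; the real classes live in pruna.
class SmashConfig: ...


class SmashConfigPrefixWrapper: ...


class SomeAlgorithm:
    def get_model_dependent_hyperparameter_defaults(
        self,
        model: Any,
        smash_config: Union[SmashConfig, SmashConfigPrefixWrapper],
    ) -> dict[str, Any]:
        # Hypothetical default; real algorithms derive these from the model.
        return {"weight_bits": 8}
```

Accepting the union lets callers pass either the full config or a prefix-scoped wrapper without tripping ty's override check against the parent method.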

Type-check cleanups

  • Add targeted type: ignore[...] where ty flags known-safe patterns (tokenizer token conversion indexing, deprecated torch.quantization.quantize_dynamic usage, etc.).
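A contrived example of the targeted-ignore pattern described above (the function and the rule applied here are illustrative, not code from the PR):

```python
from typing import Any


def first_special_token_id(token_ids: Any) -> int:
    # ty can flag indexing on a loosely-typed tokenizer result even when
    # the pattern is known-safe at runtime; a targeted ignore silences
    # only this rule on this line rather than the whole file.
    return token_ids[0]  # type: ignore[possibly-missing-attribute]
```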

Related Issue

Fixes #531

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How Has This Been Tested?

  • pre-commit run -a
  • uvx ty check src/pruna

Checklist

  • My code follows the style guidelines of this project
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing unit tests pass locally with my changes

@gsprochette (Collaborator) left a comment

Thanks a lot for taking care of this. It's almost ready to merge, but some changes conflict with #520:

  • The get_model_dependent_hyperparameter_defaults method signature was changed, so you can rebase and drop the changes in hqq, hqq_diffusers, llm_compressor, and torchao.
  • For sage_attn it's the same, but I missed it in that PR. I fixed it in #540; you can rebase onto that PR and drop the changes in this file as well.

Also 0.0.17 is out so let's use it :)

@@ -130,8 +130,8 @@ def model_check_fn(self, model: Any) -> bool:
return any(isinstance(attr_value, tuple(transformer_and_unet_models)) for attr_value in model.__dict__.values())

def get_model_dependent_hyperparameter_defaults(
Collaborator:

The signature for get_model_dependent_hyperparameter_defaults changed in #520:

  • the smash_config argument is always a SmashConfigPrefixWrapper
  • the output type is a dict[str, Any] where keys are hyperparameter names and values are their associated default value. Unfortunately, TARGET_MODULES_TYPE fits that type so some algorithms haven't been properly updated.

In all cases, these signature changes shouldn't be made for algorithms that return e.g. {"target_modules": some_TARGET_MODULES_TYPE_value}.

self,
model: Any,
- smash_config: SmashConfigPrefixWrapper,
+ smash_config: SmashConfig | SmashConfigPrefixWrapper,
Collaborator:

There shouldn't be any change in this file, but there is indeed a type problem here, since the file wasn't updated when the return type of the base class's method changed. I opened a quick PR, #540, fixing exactly this.

pyproject.toml (outdated):
"coverage",
"docutils",
- "ty==0.0.1a21",
+ "ty==0.0.16",
Collaborator:

Could we even use 0.0.17 which came out a few days ago please?

possibly-missing-import = "ignore"
possibly-missing-attribute = "ignore"
missing-argument = "ignore"
unused-type-ignore-comment = "ignore"
Collaborator:

I fully agree that we should ignore the comments for now, and I'll go through the code in a future PR to remove those ignore statements one by one because this frankly isn't checking anything...

@rensortino (Author)

Thanks for the feedback! I updated the version to 0.0.17 and integrated the latest changes.

I had to add ignore comments on the new signature of get_model_dependent_hyperparameter_defaults because ty apparently does not resolve the TARGET_MODULES_TYPE variable and raises an incompatible-override error.

@gsprochette (Collaborator)

ty is right to flag an incompatible override. As I tried to explain, the output of get_model_dependent_hyperparameter_defaults has changed: where we used to return target_modules with type TARGET_MODULES_TYPE, we now return {"target_modules": target_modules} with type dict[str, Any]. In this example, it turns out that the return type is dict[str, TARGET_MODULES_TYPE], but in general this method could return defaults for multiple hyperparameters with different types, hence the Any.
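The return-shape change being described can be sketched like this (TARGET_MODULES_TYPE is simplified to a plain list alias here, and the module names and the weight_bits entry are made-up values for illustration):

```python
from typing import Any

# Simplified stand-in for the real type alias in pruna.
TARGET_MODULES_TYPE = list


# Before #520: the method returned the target-modules value directly.
def old_style_defaults() -> TARGET_MODULES_TYPE:
    return ["q_proj", "v_proj"]


# After #520: defaults are keyed by hyperparameter name, and the dict can
# mix value types, hence dict[str, Any] rather than dict[str, TARGET_MODULES_TYPE].
def new_style_defaults() -> dict[str, Any]:
    return {"target_modules": ["q_proj", "v_proj"], "weight_bits": 8}
```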

Could you revert all changes that you made concerning this method in this PR, and then re-request review please?


Development

Successfully merging this pull request may close these issues.

[BUG] Ty pre-commit hook always fails
