How to save the model with low-precision? #41

@fantexibaba

Description

I have validated the quantization function of INQ, and it works very well. However, the final saved model still stores the weights as 32-bit floating-point numbers. How can I obtain the low-precision model during training and inspect it?
Thanks a lot.
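One possible direction (a sketch, not part of the INQ codebase): since INQ constrains each weight to zero or a power of two, a weight can be stored as a sign plus a small integer exponent rather than a float32 value. The helper names below (`pack_pow2_weights`, `unpack_pow2_weights`) and the `min_exp` parameter are hypothetical, and the code assumes the weights have already been quantized to `{0, ±2^e}`:

```python
import numpy as np

def pack_pow2_weights(w, min_exp=-7):
    """Pack power-of-two weights into int8 codes.

    Code 0 represents a zero weight; any other code stores the sign
    and a shifted exponent, so each weight fits in a single byte.
    Assumes every nonzero entry of w is +/- 2**e with e >= min_exp.
    """
    codes = np.zeros(w.shape, dtype=np.int8)
    nonzero = w != 0
    exps = np.round(np.log2(np.abs(w[nonzero]))).astype(np.int8)
    signs = np.sign(w[nonzero]).astype(np.int8)
    # Shift exponents so the magnitude is >= 1, reserving 0 for zero weights.
    codes[nonzero] = signs * (exps - min_exp + 1)
    return codes

def unpack_pow2_weights(codes, min_exp=-7):
    """Reconstruct float32 weights from the int8 codes."""
    w = np.zeros(codes.shape, dtype=np.float32)
    nonzero = codes != 0
    exps = np.abs(codes[nonzero]).astype(np.int32) + min_exp - 1
    w[nonzero] = np.sign(codes[nonzero]).astype(np.float32) * (2.0 ** exps)
    return w

# Round-trip check on weights already constrained to {0, +/-2^e}.
w = np.array([0.0, 0.5, -0.25, 2.0, -0.0078125], dtype=np.float32)
codes = pack_pow2_weights(w)
assert np.allclose(unpack_pow2_weights(codes), w)
```

You could run this packing over each layer's weight tensor after the final INQ step and save the int8 code arrays (e.g. with `np.savez`), giving roughly a 4x size reduction versus float32; at load time, unpack back to float32 before inference.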
