[bug] linearBlockMinifloat backward has arguments which are not passed #262

@johanjino

Description

Hi, while working on the adls labs I found that `linearBlockMinifloat` has a `backward` that takes the same arguments as the forward pass:

```python
def backward(
    ctx,
    grad_output: Tensor,
    width: int,
    exponent_width: int,
    exponent_bias_width: int,
    block_size: list[int] | int = [16],
    skip_first_dim: bool = False,
):
```

The backward passes of the other precisions do not take these arguments. Furthermore, torch does not pass them when computing backprop, so calling this one raises a "missing 3 positional arguments" error (`block_size` has a default). This is either a bug or simply not implemented yet; either way, this precision cannot be used for QAT (only the backward pass is broken).
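For context, here is a minimal sketch of the autograd contract the report describes. `QuantizeSTE` is a hypothetical straight-through quantizer (not the actual `linearBlockMinifloat` code): `backward` receives only `ctx` and the output gradients, and must return one value per `forward` argument, with `None` for non-tensor arguments. Extra positional parameters on `backward`, as in the signature above, are never supplied by autograd.

```python
import torch
from torch import Tensor


class QuantizeSTE(torch.autograd.Function):
    """Hypothetical straight-through quantizer illustrating the contract:
    backward gets only ctx and grad_output, never the forward arguments."""

    @staticmethod
    def forward(ctx, x: Tensor, width: int, exponent_width: int) -> Tensor:
        # Non-tensor configuration must be stashed on ctx if backward needs it,
        # not re-declared as backward parameters.
        ctx.width = width
        return x.round()  # stand-in for the real quantization logic

    @staticmethod
    def backward(ctx, grad_output: Tensor):
        # One return value per forward argument; None for non-tensor args.
        return grad_output, None, None


x = torch.ones(4, requires_grad=True)
QuantizeSTE.apply(x, 8, 4).sum().backward()
print(x.grad)  # straight-through: gradient passes unchanged (all ones here)
```

A fix along these lines would move `width`, `exponent_width`, etc. onto `ctx` in `forward` and shrink `backward` to `(ctx, grad_output)`, returning `None` gradients for the configuration arguments.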
