
Bug: the CAE penalty is incorrect for any non-sigmoid activation #2

@Sycor4x

Description

The CAE class uses the relu nonlinearity. https://github.com/avijit9/Contractive_Autoencoder_in_Pytorch/blob/master/CAE_pytorch.py#L61

However, the CAE loss is computed in a way that is only valid for sigmoid activations: it uses the sigmoid derivative h·(1 − h). That this is wrong for relu is easy to show: the derivative of relu is 1 for positive inputs and 0 for negative inputs, which is not h·(1 − h).

The link provided in the docstring for the CAE loss assumes the sigmoid nonlinearity; it's not attempting to derive the contractive penalty in general.
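To make the mismatch concrete, here is a minimal NumPy sketch (all names and shapes are hypothetical, not taken from the repo). For a one-layer encoder h = act(Wx + b), the contractive penalty is ‖J‖²_F with J = diag(act′(a))·W, where a = Wx + b. Plugging the sigmoid-style factor h·(1 − h) into this formula while the encoder actually uses relu disagrees with the true Jacobian, which we check against a finite-difference estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # hypothetical encoder weights
b = rng.normal(size=4)
x = rng.normal(size=3)

relu = lambda a: np.maximum(a, 0.0)

def encode(x, act):
    return act(W @ x + b)

def penalty(dh_da):
    # ||J||_F^2 with J = diag(dh/da) @ W
    return np.sum((dh_da[:, None] * W) ** 2)

def fd_penalty(x, act, eps=1e-6):
    # finite-difference Jacobian of the encoder, as ground truth
    J = np.empty((4, 3))
    for j in range(3):
        e = np.zeros(3)
        e[j] = eps
        J[:, j] = (encode(x + e, act) - encode(x - e, act)) / (2 * eps)
    return np.sum(J ** 2)

a = W @ x + b
h = relu(a)

# correct relu derivative: 1 where the pre-activation is positive, else 0
correct = penalty((a > 0).astype(float))
# sigmoid-style h*(1 - h) applied to relu outputs -- the bug in question
wrong = penalty(h * (1 - h))

fd_p = fd_penalty(x, relu)
print("finite-difference:", fd_p)
print("relu-derivative penalty:", correct)
print("sigmoid-style penalty:", wrong)
```

The relu-derivative penalty matches the finite-difference estimate; the sigmoid-style one does not, which is the bug this issue describes.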
