The CAE class uses the relu nonlinearity. https://github.com/avijit9/Contractive_Autoencoder_in_Pytorch/blob/master/CAE_pytorch.py#L61
However, the way the CAE loss is computed is only valid for sigmoid activations: the contractive penalty is evaluated with the closed form h * (1 - h) for the hidden units' derivative, and that identity holds only for the sigmoid. The derivative of relu is instead 1 for positive pre-activations and 0 for negative ones, so the penalty computed this way does not equal the Frobenius norm of the encoder's actual Jacobian.
The link provided in the docstring for the CAE loss assumes the sigmoid nonlinearity; it's not attempting to derive the contractive penalty in general.
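To make the mismatch concrete, here is a minimal NumPy sketch (not the repo's code; the shapes and names are illustrative). For an encoder h = act(Wx + b), the Jacobian is J = diag(act'(z)) W, so the sigmoid penalty sums (h(1-h))^2 times the squared row norms of W, while the relu penalty sums the squared row norms only over active units:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))   # encoder weights (4 hidden units, 6 inputs)
b = rng.normal(size=4)
x = rng.normal(size=6)

z = W @ x + b                 # pre-activations

# Sigmoid case: dh/dz = h * (1 - h), so
# ||J||_F^2 = sum_i (h_i * (1 - h_i))^2 * sum_j W_ij^2
h_sig = 1.0 / (1.0 + np.exp(-z))
pen_sigmoid = np.sum((h_sig * (1 - h_sig)) ** 2 * np.sum(W ** 2, axis=1))

# Relu case: dh/dz is 1 where z > 0 and 0 elsewhere, so only
# rows of W belonging to active units contribute
active = (z > 0).astype(float)
pen_relu = np.sum(active * np.sum(W ** 2, axis=1))

# Sanity check: both closed forms match an explicitly built Jacobian
J_sig = (h_sig * (1 - h_sig))[:, None] * W
J_relu = active[:, None] * W
assert np.isclose(pen_sigmoid, np.sum(J_sig ** 2))
assert np.isclose(pen_relu, np.sum(J_relu ** 2))
```

The two penalties coincide only by accident; applying the sigmoid closed form to a relu encoder penalizes the wrong quantity.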