Implementation does not learn inter/intra-series dependencies as claimed #12

@jackyue1994

Description

According to the code in the repository, FreMLP operates only along the embed_size dimension and is independent of both the channel (series) and time dimensions. Consequently, it cannot learn inter-series or intra-series dependencies, which contradicts the main claim of the original paper.

self.r1 = nn.Parameter(self.scale * torch.randn(self.embed_size, self.embed_size))
def FreMLP(self, B, nd, dimension, x, r, i, rb, ib):
    o1_real = torch.zeros([B, nd, dimension // 2 + 1, self.embed_size],
                          device=x.device)
    o1_imag = torch.zeros([B, nd, dimension // 2 + 1, self.embed_size],
                          device=x.device)

    o1_real = F.relu(
        torch.einsum('bijd,dd->bijd', x.real, r) -
        torch.einsum('bijd,dd->bijd', x.imag, i) +
        rb
    )

    o1_imag = F.relu(
        torch.einsum('bijd,dd->bijd', x.imag, r) +
        torch.einsum('bijd,dd->bijd', x.real, i) +
        ib
    )

    y = torch.stack([o1_real, o1_imag], dim=-1)
    y = F.softshrink(y, lambd=self.sparsity_threshold)
    y = torch.view_as_complex(y)
    return y
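To make the point concrete, here is a minimal standalone sketch (not from the repo; shapes are chosen arbitrarily for illustration) showing that the einsum pattern `'bijd,dd->bijd'` used in FreMLP only touches the last (embed_size) axis: perturbing one channel leaves every other channel's output unchanged, and the same holds across frequency bins.

```python
import torch

# Hypothetical shapes for illustration only.
B, nd, freq, d = 2, 4, 8, 16  # batch, channels (series), freq bins, embed_size
torch.manual_seed(0)
x = torch.randn(B, nd, freq, d)
r = torch.randn(d, d)

# Same einsum pattern as in FreMLP.
y = torch.einsum('bijd,dd->bijd', x, r)

# Perturb channel 0 only and recompute.
x2 = x.clone()
x2[:, 0] += 1.0
y2 = torch.einsum('bijd,dd->bijd', x2, r)

# Channels 1..nd-1 are unchanged: no inter-series mixing.
assert torch.allclose(y[:, 1:], y2[:, 1:])

# Perturb frequency bin 0 only: all other bins are likewise unchanged,
# i.e. no mixing along the frequency/time axis either.
x3 = x.clone()
x3[:, :, 0] += 1.0
y3 = torch.einsum('bijd,dd->bijd', x3, r)
assert torch.allclose(y[:, :, 1:], y3[:, :, 1:])
print("no cross-channel or cross-bin mixing")
```

Incidentally, because the second operand carries the repeated label `dd`, torch.einsum reads only the diagonal of `r` in this pattern (output[b,i,j,d] = x[b,i,j,d] * r[d,d]); the channel/time independence shown above holds regardless.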
