Encoder-Decoder Architecture #44

@ClashLuke

Description

Currently, our model can be either an encoder or a decoder; combining the two in one model, as T5 does, is not possible. The best approximation available right now would be to expand the context of our decoder, but a decoder-only model doesn't perform as well. Ideally, we could run full (bidirectional) attention over one part of the sequence and sample autoregressively over the other.
This issue collects ideas for implementing such a scheme and benchmarking it against the baseline fully-autoregressive model.
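
One way to get this behavior inside a single decoder stack is a prefix-LM attention mask: tokens in a prefix attend to each other bidirectionally (the "encoder" part), while the remaining tokens attend causally (the autoregressive part). Below is a minimal sketch of such a mask; `prefix_lm_mask`, the NumPy implementation, and the prefix/target split are illustrative assumptions for discussion, not part of our codebase.

```python
import numpy as np

def prefix_lm_mask(seq_len: int, prefix_len: int) -> np.ndarray:
    """Boolean attention mask for a prefix LM.

    True at [i, j] means query position i may attend to key position j.
    The first `prefix_len` tokens are fully visible to every position
    (bidirectional "encoder" region); everything else is causal.
    """
    causal = np.tril(np.ones((seq_len, seq_len), dtype=bool))  # j <= i
    prefix_visible = np.zeros((seq_len, seq_len), dtype=bool)
    prefix_visible[:, :prefix_len] = True  # all queries see the full prefix
    return causal | prefix_visible

# Example: 6 tokens, the first 3 form the bidirectional prefix.
print(prefix_lm_mask(6, 3).astype(int))
```

Feeding this mask into standard scaled dot-product attention would let one model serve both roles, so the benchmark against the fully-autoregressive baseline only needs to vary the mask, not the architecture.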

    Labels

    ML: Requires machine-learning knowledge (can be built up on the fly)
    core: Improves core model while keeping core idea intact
    research: Creative project that might fail but could give high returns
