
Any consideration on why 4 SP & 32 TP are used? #74

@ParanoidHW


Hi authors, great work!
I have a small question about the parallelism setup. Ring attention can hide the communication time under the local attention computation, so why use more tensor parallelism than sequence parallelism during inference (e.g. 32 TP vs. 4 SP) rather than the opposite, given that the communication cost introduced by TP can be neither ignored nor overlapped?
Hope you can answer my question. Many thanks~
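For context, the overlap I'm referring to: in ring attention, each rank computes local attention on the KV block it currently holds while asynchronously passing that block to the next rank, merging partial results with an online (flash-attention-style) softmax. Below is a minimal single-process numpy sketch of just the accumulation math, with the ring communication simulated by block indexing (this is an illustration of the general technique, not this repo's implementation; all names are made up for the example):

```python
import numpy as np

def full_attention(q, k, v):
    # Reference: standard softmax attention over the whole sequence.
    s = q @ k.T / np.sqrt(q.shape[-1])
    p = np.exp(s - s.max(axis=-1, keepdims=True))
    return (p @ v) / p.sum(axis=-1, keepdims=True)

def ring_attention_sim(q_blocks, k_blocks, v_blocks):
    # Each "rank" r holds one Q block permanently; KV blocks rotate
    # around the ring, one hop per step. In a real distributed run the
    # send/recv of the next KV block overlaps with this local compute.
    n = len(q_blocks)
    outs = []
    for r in range(n):
        q = q_blocks[r]
        m = np.full((q.shape[0], 1), -np.inf)   # running row max
        l = np.zeros((q.shape[0], 1))           # running softmax denom
        o = np.zeros_like(q, dtype=float)       # running weighted sum
        for step in range(n):
            idx = (r + step) % n                # KV block arriving this step
            s = q @ k_blocks[idx].T / np.sqrt(q.shape[-1])
            m_new = np.maximum(m, s.max(axis=-1, keepdims=True))
            p = np.exp(s - m_new)
            scale = np.exp(m - m_new)           # rescale old partials
            l = l * scale + p.sum(axis=-1, keepdims=True)
            o = o * scale + p @ v_blocks[idx]
            m = m_new
        outs.append(o / l)
    return np.concatenate(outs)
```

The online-softmax merge is what lets each rank process KV blocks one at a time, in any ring order, and still recover exact attention; the per-step KV exchange is the communication that hides under the local matmuls.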
