
Conversation


@wanfengcxz wanfengcxz commented Nov 28, 2025

Feature: adds an adapter for mcoplib, the MetaX official ops library.

Supported models:
- Qwen3 8B
- Qwen3 32B
- Qwen3-30B-A3B (TP only)
- Qwen2.5-VL 8B
- Qwen3-VL 8B
- InternVL2.5 8B
- InternVL3.5 8B
- InternVL3.5 38B (TP only)
- Intern-S1 mini

Multi-round chat and multi-batch inference both work.


Copilot AI left a comment


Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.


Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 6 changed files in this pull request and generated 2 comments.



CMakeLists.txt Outdated
add_subdirectory(dlinfer/vendor/${DEVICE})
add_subdirectory(dlinfer/graph/dicp/vendor)

install(CODE "message(STATUS \"Install completed for device: ${DEVICE}\")")

Copilot AI Dec 1, 2025


Trailing whitespace at end of line. Remove the trailing spaces after the closing parenthesis and double quote.

Suggested change
install(CODE "message(STATUS \"Install completed for device: ${DEVICE}\")")
install(CODE "message(STATUS \"Install completed for device: ${DEVICE}\")")
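Trailing whitespace of this kind is easy to catch mechanically before review. A minimal sketch using `grep` with a POSIX character class (the sample file and its path are illustrative, not part of this repository):

```shell
# Create a sample file where the second line ends in trailing spaces.
printf 'clean line\ndirty line   \n' > /tmp/gr_sample.txt
# -n prints line numbers; the ERE matches whitespace immediately before end-of-line.
grep -nE '[[:space:]]+$' /tmp/gr_sample.txt
# prints: 2:dirty line
```

The same pattern can be wired into a pre-commit hook so suggestions like the one above never reach review.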

import dlinfer.framework.transformers_ext
import dlinfer.framework.torch_npu_ext

# import dlinfer.framework.torch_npu_ext

Copilot AI Dec 1, 2025


Commented out import without explanation. The import of torch_npu_ext has been commented out. If this is intentional for this PR (e.g., MACA-specific changes that don't need NPU support), consider removing the line entirely or adding a comment explaining why it's disabled. If it's temporary, add a TODO comment.

Suggested change
# import dlinfer.framework.torch_npu_ext
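If the import needs to stay optional rather than be deleted outright, one common alternative to a commented-out line is an availability guard. A sketch of that pattern, where `"math"` stands in for the real optional extension module and `HAS_NPU_EXT` is a hypothetical flag, not part of the dlinfer API:

```python
import importlib.util

# Illustrative pattern only: gate an optional extension import on module
# availability instead of leaving a commented-out import behind.
# "math" is a stand-in for the real vendor extension; HAS_NPU_EXT is hypothetical.
if importlib.util.find_spec("math") is not None:
    import math  # noqa: F401
    HAS_NPU_EXT = True
else:
    HAS_NPU_EXT = False

print(HAS_NPU_EXT)  # prints True here, since "math" is always available
```

This keeps intent explicit for the next reader: the module is either imported or deliberately absent, with no dead line to puzzle over.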

@@ -1,6 +1,6 @@
# Copyright (c) 2024, DeepLink. All rights reserved.
import dlinfer.framework.transformers_ext
import dlinfer.framework.torch_npu_ext
Contributor


On the current main branch this line should no longer affect maca; try rebasing onto main and testing again.

Collaborator Author


Tested: on the latest main branch, this line does not affect maca.

maca_ext_ops.moe_align_block_size(
topk_ids, num_experts, block_size, sorted_ids, expert_ids, num_tokens_post_pad
)
if USE_MCOPLIB_OPS:
Collaborator


No need for this `if`; `USE_MCOPLIB_OPS` is a must here, so the branch is dead weight.
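The reviewer's point can be sketched as follows. Every name here is an illustrative placeholder, not the real dlinfer/mcoplib API: when the flag is mandatory on a backend, assert it once at import time and dispatch unconditionally, rather than guarding each call site.

```python
# Sketch: USE_MCOPLIB_OPS is mandatory on this backend, so fail loudly once
# at module load instead of branching on it at every call site.
USE_MCOPLIB_OPS = True  # stand-in for the real build/feature flag

if not USE_MCOPLIB_OPS:
    raise RuntimeError("mcoplib ops are required on this backend")

def moe_align_block_size(topk_ids, num_experts, block_size):
    # Dispatch directly to the (placeholder) kernel; no per-call `if` needed.
    return sorted(topk_ids)  # placeholder for the real kernel's work

print(moe_align_block_size([3, 1, 2], num_experts=4, block_size=16))
# prints [1, 2, 3]
```

The one-time check keeps the hard requirement visible while removing the per-call conditional the comment objects to.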
