[feature] [sub feature 2] Dependency for qwen image edit run #151
SJTUyh wants to merge 4 commits into AISBench:master
Conversation
Summary of Changes

Hello @SJTUyh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces a new feature that enables the execution of Qwen image editing models within the benchmark system, complete with advanced LLM-based judging capabilities. It establishes a sophisticated distributed parallelism framework, optimized for NPU devices, to efficiently run and evaluate these complex models. The changes also include foundational components for handling judge-specific datasets and improving overall data I/O performance.

Highlights
Code Review
This pull request introduces new features and components for benchmarking, including a JudgeInfer worker that uses a language model for evaluation and new code for the qwenimage_edit model with distributed execution utilities. However, two significant security vulnerabilities were identified: an insecure deserialization flaw using the pickle module in the distributed communication logic, and a potential prompt injection vulnerability in the LLM-as-a-judge prompt template. Addressing these is crucial for the framework's security. Additionally, the review recommends implementing safer task grouping, improving file handling operations, clarifying prompts for better model performance, and removing debugging artifacts and non-English comments to enhance overall robustness and maintainability.
    object_tensor, src=self.ranks[src], group=self.cpu_group
)

obj = pickle.loads(object_tensor.numpy().tobytes())
The recv_object method uses pickle.loads to deserialize data received from other ranks in the distributed group. pickle is known to be insecure and can lead to arbitrary code execution if the input data is untrusted. In a distributed environment, if one node is compromised, an attacker could use this to gain control over other nodes in the cluster. It is recommended to use a safer serialization format such as JSON or to implement cryptographic signing and verification of the pickled data.
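For illustration, a minimal pickle-free sketch of such a receive path, assuming the sender transmits the payload length first and the object is JSON-serializable; the function name and the explicit src_rank/group parameters are assumptions, not the PR's actual interface:

import json

import torch
import torch.distributed as dist

def recv_object_json(src_rank: int, group) -> object:
    # Receive the payload length, then the raw bytes as a uint8 tensor.
    size = torch.zeros(1, dtype=torch.long)
    dist.recv(size, src=src_rank, group=group)
    payload = torch.empty(int(size.item()), dtype=torch.uint8)
    dist.recv(payload, src=src_rank, group=group)
    # json.loads cannot execute code embedded in the payload, unlike pickle.loads.
    return json.loads(payload.numpy().tobytes().decode("utf-8"))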
os.remove(judge_org_prediction_path)
dump_jsonl(judge_preds, judge_org_prediction_path)
The current implementation of updating the prediction file by removing it and then writing to the same path is unsafe. If the dump_jsonl operation fails for any reason (e.g., disk full, permission error), the original prediction file will be lost. A safer pattern is to write the new content to a temporary file and then atomically rename it to the final destination.
temp_judge_org_prediction_path = judge_org_prediction_path + ".tmp"
dump_jsonl(judge_preds, temp_judge_org_prediction_path)
os.replace(temp_judge_org_prediction_path, judge_org_prediction_path)
<Original Question Begin>: \n{question}\n<Original Question End>\n\n
<Gold Target Begin>: \n{answer}\n<Gold Target End>\n\n
<Predicted Answer Begin>: \n{model_answer}\n<Predicted End>\n\n
This GRADER_TEMPLATE is susceptible to prompt injection due to the direct embedding of the untrusted {model_answer}. A malicious model could exploit this to manipulate evaluation results. It is critical to use clear delimiters and sanitize the model output to prevent such attacks. Furthermore, the template contains conflicting instructions, asking for "A" or "B" in multiple places but also "CORRECT, INCORRECT" on line 48. This inconsistency can confuse the language model and lead to unreliable evaluations. The instructions should be aligned to consistently expect "A" or "B" to match the post-processing logic.
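As one possible mitigation, here is a minimal sketch of sanitizing the untrusted model output before it is interpolated into the template; the helper name and the length cap are assumptions, only the delimiter strings are taken from the template above:

def sanitize_model_answer(model_answer: str, max_len: int = 4096) -> str:
    # Strip the grader's own delimiters so the judged model cannot close its
    # block early and inject new instructions for the judge.
    for marker in ("<Original Question Begin>", "<Original Question End>",
                   "<Gold Target Begin>", "<Gold Target End>",
                   "<Predicted Answer Begin>", "<Predicted End>"):
        model_answer = model_answer.replace(marker, "")
    # Cap the length so an adversarial answer cannot crowd out the instructions.
    return model_answer[:max_len]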
key = (
    task["models"][0]["abbr"]  # same model
    + "_"
    + str(task['datasets'][0][0]['type'])  # same dataset type
    + "_"
    + str(task["datasets"][0][0]["infer_cfg"]["inferencer"])  # same inferencer with the same args
)
The current method of generating a key for grouping tasks by concatenating strings with _ is not robust. If any of the components (like task["models"][0]["abbr"]) contain an underscore, it could lead to incorrect grouping. Using a tuple as the key would be a safer and more reliable approach.
key = (
    task["models"][0]["abbr"],  # same model
    str(task['datasets'][0][0]['type']),  # same dataset type
    str(task["datasets"][0][0]["infer_cfg"]["inferencer"]),  # same inferencer with the same args
)

os.remove(final_org_results_path)
if os.path.exists(cur_results_path):
    # 基于cur_results_path的文件复制一份final_org_results_path

# 加载被测模型的推理结果(排序后)
predictions: list = self._load_from_predictions(predictions_path)

# 为数据集添加 model_answer 列
The comments on lines 126 and 129 are in Chinese. For consistency and to make the code accessible to a wider audience, please write comments in English.
Suggested change:
- # 加载被测模型的推理结果(排序后)
- predictions: list = self._load_from_predictions(predictions_path)
- # 为数据集添加 model_answer 列
+ # Load the inference results of the model under test (sorted)
+ predictions: list = self._load_from_predictions(predictions_path)
+ # Add model_answer column to the dataset
def get_a_or_b(pred: str) -> str:
    """从模型回复中提取A或B"""
    match = re.search(r'[AB]', pred[-1:])
    return match.group(0) if match else 'B'
The implementation of get_a_or_b is not very robust as it only checks the last character of the prediction string. If the model's output has trailing whitespace or newlines (e.g., "A\n"), this function will fail to extract the intended character. A more robust approach would be to search for 'A' or 'B' at the beginning of the stripped string. Also, the docstring is in Chinese and should be translated to English.
Suggested change:
- def get_a_or_b(pred: str) -> str:
-     """从模型回复中提取A或B"""
-     match = re.search(r'[AB]', pred[-1:])
-     return match.group(0) if match else 'B'
+ def get_a_or_b(pred: str) -> str:
+     """Extracts 'A' or 'B' from the model's response."""
+     if not isinstance(pred, str):
+         return 'B'
+     match = re.search(r"^[AB]", pred.strip())
+     return match.group(0) if match else 'B'
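For example, the suggested version would behave as follows on a few illustrative (hypothetical) outputs:

get_a_or_b("A\n")          # -> 'A' (a trailing newline no longer breaks extraction)
get_a_or_b("  B")          # -> 'B'
get_a_or_b("no verdict")   # -> 'B' (falls back to 'B', as before)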
# 分支 1:scatter_idx=2 且 gather_idx=1(Ulysses 并行的 “拆分多头” 场景),按「多头维度(dim2)」拆分张量,同时将「序列维度(dim1)」重组为完整长度。
if scatter_idx == 2 and gather_idx == 1:
This comment is in Chinese. To maintain consistency and readability for all contributors, please write comments in English.
Suggested change:
- # 分支 1:scatter_idx=2 且 gather_idx=1(Ulysses 并行的 “拆分多头” 场景),按「多头维度(dim2)」拆分张量,同时将「序列维度(dim1)」重组为完整长度。
- if scatter_idx == 2 and gather_idx == 1:
+ # Branch 1: scatter_idx=2 and gather_idx=1 (Ulysses parallel "split heads" scenario), split tensor by head dimension (dim2), and reassemble sequence dimension (dim1) to full length.
+ if scatter_idx == 2 and gather_idx == 1:
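For reference, a minimal sketch of what this scatter_idx=2 / gather_idx=1 exchange typically does in Ulysses-style sequence parallelism; the function name, tensor shapes, and use of all_to_all_single are assumptions for illustration, not the PR's actual implementation:

import torch
import torch.distributed as dist

def heads_to_sequence(x: torch.Tensor, group) -> torch.Tensor:
    """Scatter the head dim (dim 2) across ranks and gather the full sequence (dim 1)."""
    world_size = dist.get_world_size(group)
    bs, shard_seq, num_heads, head_dim = x.shape  # x holds one local sequence shard
    # Split the head dimension into world_size groups and move the group axis first.
    x = x.reshape(bs, shard_seq, world_size, num_heads // world_size, head_dim)
    x = x.permute(2, 1, 0, 3, 4).contiguous()
    out = torch.empty_like(x)
    dist.all_to_all_single(out, x, group=group)  # exchange head groups for sequence shards
    # The leading axis now indexes sequence shards from every rank; fold it back
    # into dim 1 so each rank holds the full sequence for its own head group.
    return out.permute(2, 0, 1, 3, 4).reshape(bs, world_size * shard_seq,
                                              num_heads // world_size, head_dim)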
# cpu_group = torch.distributed.new_group(ranks, backend="gloo")

# 修改后(使用HCCL后端)
cpu_group = torch.distributed.new_group(ranks, backend="hccl")  # 适配昇腾环境
The variable cpu_group is initialized with the hccl backend, which is intended for Huawei NPUs (Ascend), not CPUs. This is misleading. If this process group is indeed for CPU-based communication, the gloo backend is more appropriate. If it's for device communication, the variable should be renamed to reflect that. The comment # 适配昇腾环境 (adapt to Ascend environment) should also be in English.
Suggested change:
- cpu_group = torch.distributed.new_group(ranks, backend="hccl")  # 适配昇腾环境
+ cpu_group = torch.distributed.new_group(ranks, backend="hccl")  # Adapt to Ascend environment
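A minimal sketch of the distinction, assuming both a CPU control-plane group and an NPU data-plane group are wanted; the variable names are illustrative, not the PR's code:

import torch.distributed as dist

ranks = list(range(dist.get_world_size()))
# CPU control-plane group: gloo communicates over host memory / TCP sockets.
cpu_group = dist.new_group(ranks, backend="gloo")
# NPU data-plane group: hccl is the collective backend registered by torch_npu for Ascend devices.
device_group = dist.new_group(ranks, backend="hccl")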
| print("ljf 进入采样器,涉及随机") | ||
| x0 = sample - current_sigma * model_output | ||
| noise = torch.randn_like(sample) | ||
| prev_sample = (1.0 - next_sigma) * x0 + next_sigma * noise | ||
| else: | ||
| print("ljf 进入采样器,无随机") | ||
| prev_sample = sample + dt * model_output |
These print statements with Chinese text appear to be for debugging. They should be removed from the final code to keep the output clean and professional.
| print("ljf 进入采样器,涉及随机") | |
| x0 = sample - current_sigma * model_output | |
| noise = torch.randn_like(sample) | |
| prev_sample = (1.0 - next_sigma) * x0 + next_sigma * noise | |
| else: | |
| print("ljf 进入采样器,无随机") | |
| prev_sample = sample + dt * model_output | |
| if self.config.stochastic_sampling: | |
| x0 = sample - current_sigma * model_output | |
| noise = torch.randn_like(sample) | |
| prev_sample = (1.0 - next_sigma) * x0 + next_sigma * noise | |
| else: | |
| prev_sample = sample + dt * model_output |
Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and help you get feedback more easily. If you do not understand some items, don't worry, just make the pull request and seek help from maintainers.
感谢您的贡献,我们非常重视。以下说明将使您的拉取请求更健康,更易于获得反馈。如果您不理解某些项目,请不要担心,只需提交拉取请求并从维护人员那里寻求帮助即可。
PR Type / PR类型
Related Issue | 关联 Issue
Fixes #(issue ID / issue 编号) / Relates to #(issue ID / issue 编号)
🔍 Motivation / 变更动机
Please describe the motivation of this PR and the goal you want to achieve through this PR.
请描述您的拉取请求的动机和您希望通过此拉取请求实现的目标。
📝 Modification / 修改内容
Please briefly describe what modification is made in this PR.
请简要描述此拉取请求中进行的修改。
📐 Associated Test Results / 关联测试结果
Please provide links to the related test results, such as CI pipelines, test reports, etc.
请提供相关测试结果的链接,例如 CI 管道、测试报告等。
Does the modification introduce changes that break the backward compatibility of the downstream repositories? If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.
是否引入了会破坏下游存储库向后兼容性的更改?如果是,请描述它如何破坏兼容性,以及下游项目应该如何修改其代码以保持与此 PR 的兼容性。
If the modification introduces performance degradation, please describe the impact of the performance degradation and the expected performance improvement.
如果引入了性能下降,请描述性能下降的影响和预期的性能改进。
🌟 Use cases (Optional) / 使用案例(可选)
If this PR introduces a new feature, it is better to list some use cases here and update the documentation.
如果此拉取请求引入了新功能,最好在此处列出一些用例并更新文档。
✅ Checklist / 检查列表
Before PR:
After PR:
👥 Collaboration Info / 协作信息
🌟 Useful CI Command / 实用的CI命令
/gemini review
/gemini summary
/gemini help
/readthedocs build