From 1c7787365259fc0e19d6dbf5f687ac8499541014 Mon Sep 17 00:00:00 2001
From: 辰羊
Date: Mon, 17 Nov 2025 15:41:59 +0800
Subject: [PATCH] update: add 11 new entries (ASE 2024: 3, ASE 2025: 4, other
 papers: 4)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/README.md b/README.md
index dfe108d..b9291fa 100644
--- a/README.md
+++ b/README.md
@@ -557,6 +557,8 @@ These models are Transformer encoders, decoders, and encoder-decoders pretrained
 
 12. **DivoT5**: "Directional Diffusion-Style Code Editing Pre-training" [2025-01] [[paper](https://arxiv.org/abs/2501.12079)]
 
+13. **COBOL-Java Validation**: "Automated Validation of COBOL to Java Transformation" [2025-06] [ASE 2024] [[paper](https://arxiv.org/abs/2506.10999)]
+
 #### UniLM
 
 1. **CugLM** (MLM + NSP + CLM): "Multi-task Learning based Pre-trained Language Model for Code Completion" [2020-12] [ASE 2020] [[paper](https://arxiv.org/abs/2012.14631)]
@@ -1139,6 +1141,16 @@ These models apply Instruction Fine-Tuning techniques to enhance the capacities
 
 86. "A Comprehensive Empirical Evaluation of Agent Frameworks on Code-centric Software Engineering Tasks" [2025-10] [[paper](https://arxiv.org/abs/2511.00872)]
 
+87. **Code Completion Context**: "Beyond More Context: How Granularity and Order Drive Code Completion Quality" [2025-10] [ASE 2025] [[paper](https://arxiv.org/abs/2510.06606)]
+
+88. **Code Chunking**: "Relative Positioning Based Code Chunking Method For Rich Context Retrieval In Repository Level Code Completion Task With Code Language Model" [2025-10] [ASE 2025 Workshop] [[paper](https://arxiv.org/abs/2510.08610)]
+
+89. **Ethical Profiling**: "Advancing Automated Ethical Profiling in SE: a Zero-Shot Evaluation of LLM Reasoning" [2025-10] [ASE 2025] [[paper](https://arxiv.org/abs/2510.00881)]
+
+90. **TRUSTVIS**: "TRUSTVIS: A Multi-Dimensional Trustworthiness Evaluation Framework for Large Language Models" [2025-10] [ASE 2025 Demo Track] [[paper](https://arxiv.org/abs/2510.13106)]
+
+91. **Contextualized Data-Wrangling**: "Contextualized Data-Wrangling Code Generation in Computational Notebooks" [2024-09] [ASE 2024] [[paper](https://arxiv.org/abs/2409.13551)]
+
 ### 3.4 Interactive Coding
 
 - "Interactive Program Synthesis" [2017-03] [[paper](https://arxiv.org/abs/1703.03539)]
@@ -1905,6 +1917,10 @@ For each task, the first column contains non-neural methods (e.g. n-gram, TF-IDF
 
 - "EffiReasonTrans: RL-Optimized Reasoning for Code Translation" [2025-10] [[paper](https://arxiv.org/abs/2510.18863)]
 
+- "WaDec: Decompiling WebAssembly Using Large Language Model" [2024-06] [ASE 2024] [[paper](https://arxiv.org/abs/2406.11346)]
+
+- "Automated Validation of COBOL to Java Transformation" [2025-06] [ASE 2024] [[paper](https://arxiv.org/abs/2506.10999)]
+
 ### Code Commenting and Summarization
 
 - "A Transformer-based Approach for Source Code Summarization" [2020-05] [ACL 2020] [[paper](https://arxiv.org/abs/2005.00653)]
@@ -2345,8 +2361,12 @@ For each task, the first column contains non-neural methods (e.g. n-gram, TF-IDF
 
 - "RPG: A Repository Planning Graph for Unified and Scalable Codebase Generation" [2025-09] [[paper](https://arxiv.org/abs/2509.16198)]
 
+- "Relative Positioning Based Code Chunking Method For Rich Context Retrieval In Repository Level Code Completion Task With Code Language Model" [2025-10] [ASE 2025 Workshop] [[paper](https://arxiv.org/abs/2510.08610)]
+
 - "On Pretraining for Project-Level Code Completion" [2025-10] [[paper](https://arxiv.org/abs/2510.13697)]
 
+- "Challenge on Optimization of Context Collection for Code Completion" [2025-10] [ASE 2025] [[paper](https://arxiv.org/abs/2510.04349)]
+
 ### Issue Resolution
 
 - "SWE-bench: Can Language Models Resolve Real-World GitHub Issues?" [2023-10] [ICLR 2024] [[paper](https://arxiv.org/abs/2310.06770)]
@@ -4281,6 +4301,8 @@ For each task, the first column contains non-neural methods (e.g. n-gram, TF-IDF
 
 - "CodeAlignBench: Assessing Code Generation Models on Developer-Preferred Code Adjustments" [2025-10] [[paper](https://arxiv.org/abs/2510.27565)]
 
+- "What Types of Code Review Comments Do Developers Most Frequently Resolve?" [2025-10] [ASE 2025] [[paper](https://arxiv.org/abs/2510.05450)]
+
 ## 7. Human-LLM Interaction
 
 - "Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models" [2022-04] [CHI EA 2022] [[paper](https://dl.acm.org/doi/abs/10.1145/3491101.3519665)]