
Commit d7caa23: add new papers

1 parent 89a1463

File tree: 9 files changed, 72 additions, 0 deletions


.DS_Store (0 Bytes, binary file not shown)

_includes/.DS_Store (0 Bytes, binary file not shown)

_papers/.DS_Store (2 KB, binary file not shown)

_papers/FSE-25a/IRepair.pdf (811 KB, binary file not shown)

_papers/FSE-25a/index.md

Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
---
key: FSE-25a
permalink: /papers/FSE-25a/
short_name: FSE '25
title: "IRepair: An Intent-Aware Approach to Repair Data-Driven Errors in Large Language Models"
bib: |
  @inproceedings{imtiaz2025irepair,
    author = {Sayem Mohammad Imtiaz and Astha Singh and Fraol Batole and Hridesh Rajan},
    title = {IRepair: An Intent-Aware Approach to Repair Data-Driven Errors in Large Language Models},
    booktitle = {FSE'2025: The 33rd ACM International Conference on the Foundations of Software Engineering},
    location = {Trondheim, Norway},
    month = {June 23-June 27},
    year = {2025},
    entrysubtype = {conference},
    abstract = {
      Not a day goes by without hearing about the impressive feats of large language models (LLMs), and equally, not a day passes without hearing about their challenges. LLMs are notoriously vulnerable to biases in their dataset, leading to issues such as toxicity, harmful responses, and factual inaccuracies. While domain-adaptive training has been employed to mitigate these issues, these techniques often address all model parameters indiscriminately during the repair process, resulting in poor repair quality and reduced model versatility. In this paper, drawing inspiration from fault localization via program slicing, we introduce a novel dynamic slicing-based intent-aware LLM repair strategy, IRepair. This approach selectively targets the most error-prone sections of the model for repair. Specifically, we propose dynamically slicing the model’s most sensitive layers that require immediate attention, concentrating repair efforts on those areas. This method enables more effective repairs with potentially less impact on the model’s overall versatility by altering a smaller portion of the model. Furthermore, dynamic selection allows for a more nuanced and precise model repair compared to a fixed selection strategy. We evaluated our technique on three models from the GPT2 and GPT-Neo families, with parameters ranging from 800M to 1.6B, in a toxicity mitigation setup. Our results show that IRepair repairs errors 43.6% more effectively while causing 46% less disruption to general performance compared to the closest baseline, direct preference optimization. Our empirical analysis also reveals that errors are more concentrated in a smaller section of the model, with the top 20% of layers exhibiting 773% more error density than the remaining 80%. This highlights the need for selective repair. Additionally, we demonstrate that a dynamic selection approach is essential for addressing errors dispersed throughout the model, ensuring a robust and efficient repair.
    }
  }
kind: conference
download_link: IRepair.pdf
publication_year: 2025
tags:
- d4
---
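The abstract above describes repair concentrated on the model's most sensitive layers rather than on all parameters. As a rough, hedged illustration of that idea (not the authors' IRepair implementation), the sketch below scores each layer of a toy PyTorch stack by gradient norm on a stand-in error-eliciting batch, then leaves only the top 20% of layers trainable for the repair step; the toy model, batch, and loss are assumptions made for the example.

```python
# Hedged sketch of sensitivity-guided selective repair (illustration only,
# not the authors' IRepair implementation). The toy layer stack and the
# stand-in "error batch" are assumptions made for the example.
import torch
import torch.nn as nn

torch.manual_seed(0)
layers = nn.ModuleList([nn.Linear(32, 32) for _ in range(10)])
model = nn.Sequential(*layers)

# Stand-ins for an error-eliciting batch and its repair target.
x = torch.randn(8, 32)
target = torch.randn(8, 32)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()

# Per-layer sensitivity: total gradient norm of each layer's parameters.
sensitivity = [sum(p.grad.norm().item() for p in layer.parameters())
               for layer in layers]

# Dynamically select the top 20% most sensitive layers; freeze the rest.
k = max(1, len(layers) // 5)
selected = sorted(range(len(layers)), key=lambda i: sensitivity[i], reverse=True)[:k]
for i, layer in enumerate(layers):
    for p in layer.parameters():
        p.requires_grad = i in selected

# A repair step then fine-tunes only the selected slice of the model.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
print("repairing layers:", selected)
```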

_papers/ICSE-25b/LocalizeAgent.pdf (802 KB, binary file not shown)

_papers/ICSE-25b/index.md

Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
---
key: ICSE-25b
permalink: /papers/ICSE-25b/
short_name: ICSE '25
title: "An LLM-Based Agent-Oriented Approach for Automated Code Design Issue Localization"
bib: |
  @inproceedings{batole2025localizeAgent,
    author = {Fraol Batole and David OBrien and Tien N. Nguyen and Robert Dyer and Hridesh Rajan},
    title = {An LLM-Based Agent-Oriented Approach for Automated Code Design Issue Localization},
    booktitle = {ICSE'2025: The 47th International Conference on Software Engineering},
    location = {Ottawa, Canada},
    month = {April 27-May 3},
    year = {2025},
    entrysubtype = {conference},
    abstract = {Maintaining software design quality is crucial for the long-term maintainability and evolution of systems. However, design issues such as poor modularity and excessive complexity often emerge as codebases grow. Developers rely on external tools, such as program analysis techniques, to identify such issues. This work leverages Large Language Models (LLMs) to develop an automated approach for analyzing and localizing design issues. Large language models have demonstrated significant performance on coding tasks, but directly leveraging them for design issue localization is challenging. Large codebases exceed typical LLM context windows, and program analysis tool outputs in non-textual modalities (e.g., graphs or interactive visualizations) are incompatible with LLMs’ natural language inputs. To address these challenges, we propose LOCALIZEAGENT, a novel multi-agent framework for effective design issue localization. LOCALIZEAGENT integrates specialized agents that (1) analyze code to identify potential code design issues, (2) transform program analysis outputs into abstraction-aware LLM-friendly natural language summaries, (3) generate context-aware prompts tailored to specific refactoring types, and (4) leverage LLMs to locate and rank the localized issues based on their relevance. Our evaluation using diverse real-world codebases demonstrates significant improvements over the baseline approaches, with LOCALIZEAGENT achieving 138%, 166%, and 206% relative improvements in exact-match accuracy for localizing information hiding, complexity, and modularity issues, respectively.
    }
  }
kind: conference
download_link: LocalizeAgent.pdf
publication_year: 2025
tags:
- d4
---
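The LOCALIZEAGENT abstract describes a four-agent pipeline: analyze code, translate analysis output into LLM-friendly summaries, build refactoring-type-specific prompts, and let an LLM rank candidate locations. The skeleton below is a hedged sketch of that flow in plain Python; every function name, the `Candidate` type, and the stubbed `run_llm` call are hypothetical names invented for illustration, not the paper's API.

```python
# Hedged sketch of the agent pipeline described in the abstract: analysis,
# summarization, prompt construction, and LLM-based ranking. All names and
# the `run_llm` stub are hypothetical, not LOCALIZEAGENT's actual API.
from dataclasses import dataclass

@dataclass
class Candidate:
    location: str      # e.g. "app/core.py:Scheduler.run"
    evidence: str      # analysis facts supporting the issue
    score: float = 0.0

def analyze_code(repo_path: str) -> list[Candidate]:
    """Agent 1: run program analysis and collect raw design-issue candidates."""
    return [Candidate("app/core.py:Scheduler.run", "cyclomatic complexity 41")]

def summarize(c: Candidate) -> str:
    """Agent 2: turn non-textual analysis output into an LLM-friendly summary."""
    return f"{c.location}: {c.evidence}"

def build_prompt(summaries: list[str], refactoring_type: str) -> str:
    """Agent 3: context-aware prompt tailored to one refactoring type."""
    body = "\n".join(summaries)
    return f"Rank the following candidates for {refactoring_type} issues:\n{body}"

def run_llm(prompt: str) -> list[float]:
    """Agent 4 (stub): an LLM call returning one relevance score per candidate."""
    return [0.9]  # placeholder score

def localize(repo_path: str, refactoring_type: str) -> list[Candidate]:
    candidates = analyze_code(repo_path)
    prompt = build_prompt([summarize(c) for c in candidates], refactoring_type)
    for c, s in zip(candidates, run_llm(prompt)):
        c.score = s
    return sorted(candidates, key=lambda c: c.score, reverse=True)

print(localize(".", "complexity"))
```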
_papers/ICSE-25c/Deep_Mock_Testing.pdf (815 KB, binary file not shown)

_papers/ICSE-25c/index.md

Lines changed: 24 additions & 0 deletions
@@ -0,0 +1,24 @@
---
key: ICSE-25c
permalink: /papers/ICSE-25c/
short_name: ICSE '25
title: "Mock Deep Testing: Toward Separate Development of Data and Models for Deep Learning"
bib: |
  @inproceedings{manke2025mockdeeptesting,
    author = {Ruchira Manke and Mohammad Wardat and Foutse Khomh and Hridesh Rajan},
    title = {Mock Deep Testing: Toward Separate Development of Data and Models for Deep Learning},
    booktitle = {ICSE'2025: The 47th International Conference on Software Engineering},
    location = {Ottawa, Canada},
    month = {April 27-May 3},
    year = {2025},
    entrysubtype = {conference},
    abstract = {While deep learning (DL) has permeated and become an integral component of many critical software systems, software engineering research has not yet explored how to separately test data and models that are integral for DL approaches to work effectively. The main challenge in independently testing these components arises from the tight dependency between data and models. This research explores this gap, introducing our methodology of mock deep testing for unit testing of DL applications. To enable unit testing, we introduce a design paradigm that decomposes the workflow into distinct, manageable components, minimizes sequential dependencies, and modularizes key stages of the DL pipeline, including data preparation and model design. For unit testing these components, we propose modeling their dependencies using mocks. In the context of DL, mocks refer to mock data and mock model that mimic the behavior of the original data and model, respectively. This modular approach facilitates independent development and testing of the components, ensuring comprehensive quality assurance throughout the development process. We have developed KUnit, a framework for enabling mock deep testing for the Keras library, a popular library for developing DL applications. We empirically evaluated KUnit to determine the effectiveness of mocks in independently testing data and models. Our assessment of 50 DL programs obtained from Stack Overflow and GitHub shows that mocks effectively identified 10 issues in the data preparation stage and 53 issues in the model design stage. We also conducted a user study with 36 participants using KUnit to perceive the effectiveness of our approach. Participants using KUnit successfully resolved 25 issues in the data preparation stage and 38 issues in the model design stage. We also found that mock objects provide a lightweight emulation of the dependencies for unit testing, facilitating early bug detection. Lastly, to evaluate the usability of KUnit, we conducted a post-study survey. The results reveal that KUnit is helpful to DL application developers, enabling them to independently test each component (data and model) and resolve issues effectively in different stages.
    }
  }
kind: conference
download_link: Deep_Mock_Testing.pdf
publication_year: 2025
tags:
- d4
---
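The mock-deep-testing idea above, separating tests for the data-preparation stage and the model-design stage by mocking the other side, can be illustrated with a small generic unit test. This is a hedged sketch using unittest and NumPy, not the KUnit framework or its Keras integration; `prepare_data`, `MockModel`, and `TinyModel` are names invented for the example.

```python
# Hedged illustration of mock deep testing (not the KUnit framework):
# the data stage is tested against a mock model, and the model stage is
# tested against mock data, so neither test depends on the other component.
import unittest
import numpy as np

def prepare_data(raw: np.ndarray) -> np.ndarray:
    """Data-preparation stage under test: scale features into [0, 1]."""
    return (raw - raw.min()) / (raw.max() - raw.min() + 1e-9)

class MockModel:
    """Mock model: exposes only the interface contract the real model needs."""
    expected_features = 4
    def fit(self, x: np.ndarray) -> None:
        assert x.ndim == 2 and x.shape[1] == self.expected_features

class TinyModel:
    """Model-design stage under test: a minimal dense layer in NumPy."""
    def __init__(self, in_dim: int = 4, out_dim: int = 2):
        self.w = np.zeros((in_dim, out_dim))
    def forward(self, x: np.ndarray) -> np.ndarray:
        return x @ self.w

class TestStagesIndependently(unittest.TestCase):
    def test_data_preparation_with_mock_model(self):
        x = prepare_data(np.random.rand(16, 4) * 100)
        self.assertTrue(((x >= 0) & (x <= 1)).all())
        MockModel().fit(x)                  # prepared data meets the model's contract

    def test_model_design_with_mock_data(self):
        mock_data = np.random.rand(16, 4)   # mimics the shape/range of real prepared data
        out = TinyModel().forward(mock_data)
        self.assertEqual(out.shape, (16, 2))  # design-level check, no real dataset needed

if __name__ == "__main__":
    unittest.main()
```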
