
Commit 72a2592

Fix (#345)
1 parent 3704601 commit 72a2592

3 files changed: +10 −13 lines changed

source/_data/SymbioticLab.bib

Lines changed: 4 additions & 4 deletions
@@ -1959,14 +1959,14 @@ @Article{mercury:arxiv24
 }
 }
 
-@InProceedings{autoiac:neurips24,
+@InProceedings{iac-eval:neuripsdb24,
   author = {Patrick TJ Kon and Jiachen Liu and Yiming Qiu and Weijun Fan and Ting He and Lei Lin and Haoran Zhang and Owen M. Park and George Sajan Elengikal and Yuxin Kang and Ang Chen and Mosharaf Chowdhury and Myungjin Lee and Xinyu Wang},
   title = {{IaC-Eval}: A code generation benchmark for Infrastructure-as-Code programs},
   year = {2024},
+  booktitle = {NeurIPS D\&B},
   publist_topic = {Systems + AI},
-  publist_confkey = {NeurIPS'24},
-  booktitle = {NeurIPS},
-  publist_link = {paper || autoiac-neurips24.pdf},
+  publist_confkey = {NeurIPS'24 D&B},
+  publist_link = {paper || iac-eval-neuripsdb24.pdf},
   publist_link = {code || https://github.com/autoiac-project/iac-eval},
   publist_abstract = {
   Infrastructure-as-Code (IaC), an important component of cloud computing, allows the definition of cloud infrastructure in high-level programs. However, developing IaC programs is challenging, complicated by factors that include the burgeoning complexity of the cloud ecosystem (e.g., diversity of cloud services and workloads), and the relative scarcity of IaC-specific code examples and public repositories. While large language models (LLMs) have shown promise in general code generation and could potentially aid in IaC development, no benchmarks currently exist for evaluating their ability to generate IaC code. We present IaC-Eval, a first step in this research direction. IaC-Eval's dataset includes 458 human-curated scenarios covering a wide range of popular AWS services, at varying difficulty levels. Each scenario mainly comprises a natural language IaC problem description and an infrastructure intent specification. The former is fed as user input to the LLM, while the latter is a general notion used to verify if the generated IaC program conforms to the user's intent; by making explicit the problem's requirements that can encompass various cloud services, resources and internal infrastructure details. Our in-depth evaluation shows that contemporary LLMs perform poorly on IaC-Eval, with the top-performing model, GPT-4, obtaining a pass@1 accuracy of 19.36%. In contrast, it scores 86.6% on EvalPlus, a popular Python code generation benchmark, highlighting a need for advancements in this domain. We open-source the IaC-Eval dataset and evaluation framework at https://github.com/autoiac-project/iac-eval to enable future research on LLM-based IaC code generation.}
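The abstract above reports a pass@1 accuracy of 19.36%. For context, pass@k for code benchmarks is usually computed with the unbiased estimator popularized by the HumanEval benchmark; the sketch below is illustrative and not taken from this repository (the function name and sample counts are assumptions):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    sampled completions passes, given c correct out of n generated samples."""
    if n - c < k:
        # Fewer failures than draws: some draw must be a correct sample.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# pass@1 reduces to the fraction of correct samples:
print(round(pass_at_k(n=100, c=19, k=1), 2))  # 0.19
```

With k = 1 the combinatorial term simplifies to (n − c)/n, so pass@1 is just the per-sample success rate averaged over problems.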

source/publications/index.md

Lines changed: 6 additions & 9 deletions
@@ -438,19 +438,16 @@ venues:
       name: ICLR 23 Workshop on Tackling Climate Change with Machine Learning
       date: 2023-05-04
       url: https://www.climatechange.ai/events/iclr2023
-  NeurIPS:
-    category: Conferences
-    occurrences:
-      - key: NeurIPS'24
-        name: The Thirty-eight Conference on Neural Information Processing Systems
-        date: 2024-12-09
-        url: https://neurips.cc/Conferences/2024
-        acceptance: 25.8%
   'NeurIPS D&B':
     category: Conferences
     occurrences:
+      - key: NeurIPS'24 D&B
+        name: The 38th Conference on Neural Information Processing Systems Datasets & Benchmarks Track
+        date: 2024-12-10
+        url: https://neurips.cc/Conferences/2024
+        acceptance: 25.3%
       - key: NeurIPS'25 D&B
-        name: The Thirty-ninth Conference on Neural Information Processing Systems Track on Datasets and Benchmarks
+        name: The 39th Conference on Neural Information Processing Systems Datasets & Benchmarks Track
         date: 2025-12-02
         url: https://neurips.cc/Conferences/2025
         acceptance: 24.91%
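The substance of this commit is keeping the bib entry's `publist_confkey` in sync with an `occurrences` key in the venues YAML. A minimal consistency check in that spirit (a sketch with inlined stand-in data, not code from this repository):

```python
import re

# Illustrative snippets standing in for the two files touched by this commit.
bib_text = "publist_confkey = {NeurIPS'24 D&B},"
venue_keys = {"NeurIPS'24 D&B", "NeurIPS'25 D&B"}

# Every publist_confkey in the .bib should name a venue occurrence key.
confkeys = re.findall(r"publist_confkey\s*=\s*\{([^}]*)\}", bib_text)
missing = [k for k in confkeys if k not in venue_keys]
print(missing)  # [] when the bib and the venue list agree
```

Running a check like this in CI would catch the mismatch this commit fixes (a `NeurIPS'24` confkey with no matching venue entry) before it reaches the site build.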
