
Commit 279a7a3

Update README.md
1 parent 42fc975 · commit 279a7a3

1 file changed: +10 −9 lines

README.md

Lines changed: 10 additions & 9 deletions
@@ -131,29 +131,30 @@ If the paper and code are helpful, please kindly cite our paper:
 ```
 @inproceedings{bao-etal-2024-abstract,
     title = "{A}bstract {M}eaning {R}epresentation-Based Logic-Driven Data Augmentation for Logical Reasoning",
-    author = "Bao, Qiming and
-      Peng, Alex and
+    author = {Bao, Qiming and
+      Peng, Alex Yuxuan and
       Deng, Zhenyun and
       Zhong, Wanjun and
-      Gendron, Gael and
+      Gendron, Ga{\"e}l and
       Pistotti, Timothy and
-      Tan, Neset and
+      Tan, Ne{\c{s}}et and
       Young, Nathan and
       Chen, Yang and
       Zhu, Yonghua and
       Denny, Paul and
       Witbrock, Michael and
-      Liu, Jiamou",
+      Liu, Jiamou},
     editor = "Ku, Lun-Wei and
       Martins, Andre and
       Srikumar, Vivek",
-    booktitle = "Findings of the Association for Computational Linguistics ACL 2024",
+    booktitle = "Findings of the Association for Computational Linguistics: ACL 2024",
     month = aug,
     year = "2024",
-    address = "Bangkok, Thailand and virtual meeting",
+    address = "Bangkok, Thailand",
     publisher = "Association for Computational Linguistics",
-    url = "https://aclanthology.org/2024.findings-acl.353",
+    url = "https://aclanthology.org/2024.findings-acl.353/",
+    doi = "10.18653/v1/2024.findings-acl.353",
     pages = "5914--5934",
-    abstract = "Combining large language models with logical reasoning enhances their capacity to address problems in a robust and reliable manner. Nevertheless, the intricate nature of logical reasoning poses challenges when gathering reliable data from the web to build comprehensive training datasets, subsequently affecting performance on downstream tasks. To address this, we introduce a novel logic-driven data augmentation approach, AMR-LDA. AMR-LDA converts the original text into an Abstract Meaning Representation (AMR) graph, a structured semantic representation that encapsulates the logical structure of the sentence, upon which operations are performed to generate logically modified AMR graphs. The modified AMR graphs are subsequently converted back into text to create augmented data. Notably, our methodology is architecture-agnostic and enhances both generative large language models, such as GPT-3.5 and GPT-4, through prompt augmentation, and discriminative large language models through contrastive learning with logic-driven data augmentation. Empirical evidence underscores the efficacy of our proposed method with improvement in performance across seven downstream tasks, such as reading comprehension requiring logical reasoning, textual entailment, and natural language inference. Furthermore, our method leads on the ReClor leaderboard. The source code and data are publicly available",
+    abstract = "Combining large language models with logical reasoning enhances their capacity to address problems in a robust and reliable manner. Nevertheless, the intricate nature of logical reasoning poses challenges when gathering reliable data from the web to build comprehensive training datasets, subsequently affecting performance on downstream tasks. To address this, we introduce a novel logic-driven data augmentation approach, AMR-LDA. AMR-LDA converts the original text into an Abstract Meaning Representation (AMR) graph, a structured semantic representation that encapsulates the logical structure of the sentence, upon which operations are performed to generate logically modified AMR graphs. The modified AMR graphs are subsequently converted back into text to create augmented data. Notably, our methodology is architecture-agnostic and enhances both generative large language models, such as GPT-3.5 and GPT-4, through prompt augmentation, and discriminative large language models through contrastive learning with logic-driven data augmentation. Empirical evidence underscores the efficacy of our proposed method with improvement in performance across seven downstream tasks, such as reading comprehension requiring logical reasoning, textual entailment, and natural language inference. Furthermore, our method leads on the ReClor leaderboard at https://eval.ai/web/challenges/challenge-page/503/leaderboard/1347. The source code and data are publicly available at https://github.com/Strong-AI-Lab/Logical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning."
 }
 ```
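The updated abstract describes AMR-LDA's pipeline: convert text to an Abstract Meaning Representation (AMR) graph, apply logic-driven operations, and convert the modified graph back to text. The sketch below is a minimal, hypothetical illustration of that idea using one logical equivalence (contraposition); `parse_stub` and `generate_stub` are placeholders for real AMR parsing and generation models, not the authors' code.

```python
# Minimal, hypothetical sketch of logic-driven augmentation as described in
# the abstract: text -> structured representation -> logical transformation
# -> text. The parse/generate stubs stand in for real AMR models.
from dataclasses import dataclass

@dataclass
class Conditional:
    """Toy stand-in for an AMR graph of an 'If P, then Q' sentence."""
    antecedent: str
    consequent: str

def parse_stub(sentence: str) -> Conditional:
    # Placeholder for a text-to-AMR parser (assumption, not the paper's code).
    p, q = sentence.removeprefix("If ").rstrip(".").split(", then ")
    return Conditional(antecedent=p, consequent=q)

def contraposition(c: Conditional) -> Conditional:
    # Logical equivalence: (P -> Q) <=> (not Q -> not P).
    neg = "it is not the case that "
    return Conditional(antecedent=neg + c.consequent,
                       consequent=neg + c.antecedent)

def generate_stub(c: Conditional) -> str:
    # Placeholder for an AMR-to-text generator.
    return f"If {c.antecedent}, then {c.consequent}."

original = "If it rains, then the ground gets wet."
print(generate_stub(contraposition(parse_stub(original))))
# -> If it is not the case that the ground gets wet,
#    then it is not the case that it rains.
```

Because the augmented sentence is logically equivalent to the original, it can serve as a positive pair for the contrastive-learning setup the abstract mentions.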
