Commit d19ce55

Update README.md
1 parent 0665835 · commit d19ce55

1 file changed: 1 addition, 1 deletion

README.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 # Logical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning
 
-We propose a new AMR-based, logic-driven data augmentation method for contrastive-learning intermediate training, and we then evaluate on downstream tasks that require logical reasoning, including logical reasoning reading comprehension tasks (ReClor and LogiQA) and natural language inference tasks (MNLI, MRPC, RTE, QNLI and QQP). Our `AMR-LDA` model (AMR-LDA Prompt Augmentation+GPT4) and `AMR-LDA (DeBERTa-v2-xxlarge-AMR-LDA-Cont)` lead the [ReClor leaderboard](https://eval.ai/web/challenges/challenge-page/503/leaderboard/1347), and we are the first group worldwide to score above 90% on the hidden test set. Our [paper](https://arxiv.org/abs/2305.12599) has been accepted to the Findings of ACL 2024.
+We propose a new AMR-based, logic-driven data augmentation method for contrastive-learning intermediate training, and we then evaluate on downstream tasks that require logical reasoning, including logical reasoning reading comprehension tasks (ReClor and LogiQA) and natural language inference tasks (MNLI, MRPC, RTE, QNLI and QQP). Our `AMR-LDA` model (AMR-LDA Prompt Augmentation+GPT4) and `AMR-LDA (DeBERTa-v2-xxlarge-AMR-LDA-Cont)` lead the [ReClor leaderboard](https://eval.ai/web/challenges/challenge-page/503/leaderboard/1347), and we are the first group worldwide to score above 90% on the hidden test set. Our [paper](https://aclanthology.org/2024.findings-acl.353/) has been accepted to the Findings of ACL 2024.
 <!-- and we also release the model weights on `Huggingface/models`.-->
 
 <img src="./reclor_amr_lda.PNG" width="800" />
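
For context on the method the changed paragraph summarizes: AMR-LDA rewrites sentences with logical-equivalence laws (for example, contraposition) so that the original sentence and its rewrite can serve as a positive pair for contrastive learning. Below is a minimal, hypothetical text-level sketch of that idea; the actual pipeline operates on AMR graphs, and the function `contrapositive` and its regex are illustrative assumptions, not the repository's API.

```python
# Hypothetical sketch (NOT the repository's code): build a logically
# equivalent paraphrase via contraposition to use as a positive pair
# for contrastive training. AMR-LDA applies such laws to AMR graphs;
# this simplified version works directly on text for illustration.
import re
from typing import Optional

def contrapositive(sentence: str) -> Optional[str]:
    """Rewrite 'If A, then B.' as the logically equivalent
    'If not B, then not A.' (contraposition)."""
    match = re.match(r"[Ii]f (.+?), then (.+?)\.?$", sentence)
    if match is None:
        return None  # not a simple conditional; no augmentation produced
    antecedent, consequent = match.groups()
    return (f"If it is not the case that {consequent}, "
            f"then it is not the case that {antecedent}.")

anchor = "If it rains, then the ground is wet."
positive = contrapositive(anchor)  # logically equivalent -> positive pair
print(anchor)
print(positive)
```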
