From 552017908d917564f047595026272db607815488 Mon Sep 17 00:00:00 2001
From: Christopher Akiki
Date: Wed, 18 Jun 2025 15:03:40 +0200
Subject: [PATCH] [MINOR:TYPO] Update README.md snippit -> snippet

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 93872f1..1e23baa 100644
--- a/README.md
+++ b/README.md
@@ -48,7 +48,7 @@ We added a PyTorch implementation of the sliding window attention that doesn't r
 **Limitations**: uses 2x more memory (but fp16 offsets that), and doesn’t support dilation and autoregressive attention (not needed for finetuning)
 
-therefore, it is suitable for finetuning on downstream tasks but not a good choice for language modeling. The code snippit below and the TriviaQA scripts were updated to use this new implementation.
+therefore, it is suitable for finetuning on downstream tasks but not a good choice for language modeling. The code snippet below and the TriviaQA scripts were updated to use this new implementation.
 
 **\*\*\*\*\* End new information \*\*\*\*\***