# In-Context Alignment: Chat with Vanilla Language Models Before Fine-Tuning

- Use `decode_icl_llama.py` to generate responses with vanilla Llama-2 models using in-context alignment.
- Use the other `decode_*.py` files to generate responses with baseline models.
- Use `eval_outputs.py` for automatic evaluations.
- Example commands can be found in the headers of the files.
- For more details, please refer to our paper or email Han at xiaochuang.han@gmail.com :)