diff --git a/README.md b/README.md
index 4dbd806..1d0c60a 100644
--- a/README.md
+++ b/README.md
@@ -384,6 +384,7 @@ In the era of LLMs, KD of LLMs plays the following crucial roles:
 #### Instruction Following
 | Title | Venue | Date | Code | Data|
 |:--------|:--------:|:--------:|:--------:|:--------:|
+| [**Precision Shaking and DORPO: Conceptual Foundations of LLM Knowledge Distillation Methods**](https://github.com/OpZest/Papers/blob/main/White_papers/Precision_Shaking_and_DORPO.md) | GitHub | 2024-06 |
 | [**Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for Language Models**](https://arxiv.org/abs/2402.13064) | arXiv | 2024-02 |
 | [**Revisiting Knowledge Distillation for Autoregressive Language Models**](https://arxiv.org/pdf/2402.11890.pdf) | arXiv | 2024-02 |
 | [**Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning**](https://arxiv.org/abs/2402.10110) | arXiv | 2024-02 | [Github](https://github.com/tianyi-lab/Reflection_Tuning) | [Data](https://github.com/tianyi-lab/Reflection_Tuning)|