Commit eecbd5e

Author: Haojin Yang
Message: Updated some links in readme and docs.
Parent: 06fe896

File tree: 2 files changed, +6 -6 lines changed


README.md

Lines changed: 4 additions & 4 deletions
@@ -15,8 +15,8 @@ Building on the foundational strengths of Bitorch Engine, the technology has bee
 push the boundaries of neural network training and inference.
 For instance:
 
-- [green-bit-llm-trainer](https://github.com/GreenBitAI/green-bit-llm/tree/main/sft): In this project, BIE represents a significant leap in the field of Large Language Model (LLM) fine-tuning. Unlike traditional approaches that either quantize a fully trained model or introduce a few additional trainable parameters for [LoRA](https://github.com/microsoft/LoRA)-style fine-tuning, this project innovates by directly fine-tuning the quantized parameters of LLMs. This paradigm shift allows for full-scale quantization fine-tuning of LLMs, ensuring that the training process tightly integrates with the quantization scheme from the outset.
-- [green-bit-llm-inference](https://github.com/GreenBitAI/green-bit-llm/tree/main/inference) also showcases BIE's adeptness at supporting inference for models quantized from 4 down to 2 bits without any significant loss in accuracy compared to the original 32- or 16-bit models. It stands as a testament to BIE's capability to maintain the delicate balance between model size, computational efficiency, and accuracy, addressing one of the key challenges in deploying sophisticated neural networks in resource-constrained environments.
+- [green-bit-llm-trainer](https://github.com/GreenBitAI/green-bit-llm/tree/main/green_bit_llm/sft): In this project, BIE represents a significant leap in the field of Large Language Model (LLM) fine-tuning. Unlike traditional approaches that either quantize a fully trained model or introduce a few additional trainable parameters for [LoRA](https://github.com/microsoft/LoRA)-style fine-tuning, this project innovates by directly fine-tuning the quantized parameters of LLMs. This paradigm shift allows for full-scale quantization fine-tuning of LLMs, ensuring that the training process tightly integrates with the quantization scheme from the outset.
+- [green-bit-llm-inference](https://github.com/GreenBitAI/green-bit-llm/tree/main/green_bit_llm/inference) also showcases BIE's adeptness at supporting inference for models quantized from 4 down to 2 bits without any significant loss in accuracy compared to the original 32- or 16-bit models. It stands as a testament to BIE's capability to maintain the delicate balance between model size, computational efficiency, and accuracy, addressing one of the key challenges in deploying sophisticated neural networks in resource-constrained environments.
 
 These projects exemplify the practical applications of Bitorch Engine and underscore its flexibility and efficiency for modern AI research and development.
 However, keep in mind that BIE is still in an early beta stage; see our roadmap below.
@@ -219,8 +219,8 @@ Check out the [Documentation](https://greenbitai.github.io/bitorch-engine) for A
 ## Examples
 
 - Basic example scripts can be found directly in [examples](examples).
-- [green-bit-llm-trainer](https://github.com/GreenBitAI/green-bit-llm/tree/main/sft) showcases fine-tuning of LLMs with quantized parameters.
-- [green-bit-llm-inference](https://github.com/GreenBitAI/green-bit-llm/tree/main/inference) showcases BIE's adeptness at supporting fast inference for 4- to 2-bit LLMs.
+- [green-bit-llm-trainer](https://github.com/GreenBitAI/green-bit-llm/tree/main/green_bit_llm/sft) showcases fine-tuning of LLMs with quantized parameters.
+- [green-bit-llm-inference](https://github.com/GreenBitAI/green-bit-llm/tree/main/green_bit_llm/inference) showcases BIE's adeptness at supporting fast inference for 4- to 2-bit LLMs.
 
 ## Contributors

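The README text above describes inference on models quantized down to 2 bits. As a rough illustration of the uniform quantization idea such low-bit models rely on, here is a minimal sketch; `quantize_uniform` is a hypothetical helper written for this page, not part of the Bitorch Engine API, which uses packed integer kernels rather than plain Python floats:

```python
def quantize_uniform(weights, bits=2):
    """Uniformly quantize a list of floats to 2**bits symmetric levels.

    Illustrative only: maps each weight to the nearest of a small set of
    evenly spaced values, clamped to the representable integer range.
    """
    levels = 2 ** bits
    # Step size so the largest-magnitude weight fits in the level range.
    scale = max(abs(w) for w in weights) / (levels / 2)
    lo, hi = -(levels // 2), levels // 2 - 1  # e.g. -2..1 for 2 bits
    return [max(lo, min(hi, round(w / scale))) * scale for w in weights]

print(quantize_uniform([0.9, -0.4, 0.1, -1.0], bits=2))
# prints [0.5, -0.5, 0.0, -1.0]
```

With 2 bits there are only four representable values per tensor (here -1.0, -0.5, 0.0, 0.5), which is why training directly on quantized parameters, as green-bit-llm-trainer does, is a harder problem than quantizing after training.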
docs/source/index.rst

Lines changed: 2 additions & 2 deletions
@@ -6,8 +6,8 @@ Welcome to the documentation of Bitorch Engine (BIE): a cutting-edge computation
 
 Building on the foundational strengths of Bitorch Engine, the technology has been employed in pioneering projects that push the boundaries of neural network training and inference. For instance,
 
-- `green-bit-llm-trainer <https://github.com/GreenBitAI/green-bit-llm/tree/main/sft>`_: In this project, BIE represents a significant leap in the field of Large Language Model (LLM) fine-tuning. Unlike traditional approaches that either quantize a fully trained model or introduce a few additional trainable parameters for `LoRA <https://github.com/microsoft/LoRA>`_-style fine-tuning, this project innovates by directly fine-tuning the quantized parameters of LLMs. This paradigm shift allows for full-scale quantization fine-tuning of LLMs, ensuring that the training process tightly integrates with the quantization scheme from the outset.
-- `green-bit-llm-inference <https://github.com/GreenBitAI/green-bit-llm/tree/main/inference>`_ also showcases BIE's adeptness at supporting inference for models quantized from 4 down to 2 bits without any significant loss in accuracy compared to the original 32- or 16-bit models. It stands as a testament to BIE's capability to maintain the delicate balance between model size, computational efficiency, and accuracy, addressing one of the key challenges in deploying sophisticated neural networks in resource-constrained environments.
+- `green-bit-llm-trainer <https://github.com/GreenBitAI/green-bit-llm/tree/main/green_bit_llm/sft>`_: In this project, BIE represents a significant leap in the field of Large Language Model (LLM) fine-tuning. Unlike traditional approaches that either quantize a fully trained model or introduce a few additional trainable parameters for `LoRA <https://github.com/microsoft/LoRA>`_-style fine-tuning, this project innovates by directly fine-tuning the quantized parameters of LLMs. This paradigm shift allows for full-scale quantization fine-tuning of LLMs, ensuring that the training process tightly integrates with the quantization scheme from the outset.
+- `green-bit-llm-inference <https://github.com/GreenBitAI/green-bit-llm/tree/main/green_bit_llm/inference>`_ also showcases BIE's adeptness at supporting inference for models quantized from 4 down to 2 bits without any significant loss in accuracy compared to the original 32- or 16-bit models. It stands as a testament to BIE's capability to maintain the delicate balance between model size, computational efficiency, and accuracy, addressing one of the key challenges in deploying sophisticated neural networks in resource-constrained environments.
 
 All changes are tracked in the `changelog <https://github.com/GreenBitAI/bitorch-engine/blob/main/CHANGELOG.md>`_.