README.md: 4 additions, 4 deletions
```diff
@@ -15,8 +15,8 @@ Building on the foundational strengths of Bitorch Engine, the technology has been employed in pioneering projects that
 push the boundaries of neural network training and inference.
 For instance:
 
-- [green-bit-llm-trainer](https://github.com/GreenBitAI/green-bit-llm/tree/main/sft): In this project, BIE represents a significant leap in Large Language Model (LLM) fine-tuning. Unlike traditional approaches that either quantize a fully trained model or introduce a few additional trainable parameters for [LoRA](https://github.com/microsoft/LoRA)-style fine-tuning, this project directly fine-tunes the quantized parameters of LLMs. This paradigm shift enables full-scale quantized fine-tuning of LLMs, ensuring that the training process is tightly integrated with the quantization schema from the outset.
-- [green-bit-llm-inference](https://github.com/GreenBitAI/green-bit-llm/tree/main/inference) also showcases BIE's support for inference with models quantized down from 4 to 2 bits, without significant accuracy loss compared to the original 32- or 16-bit models. It demonstrates BIE's ability to balance model size, computational efficiency, and accuracy, addressing one of the key challenges in deploying sophisticated neural networks in resource-constrained environments.
+- [green-bit-llm-trainer](https://github.com/GreenBitAI/green-bit-llm/tree/main/green_bit_llm/sft): In this project, BIE represents a significant leap in Large Language Model (LLM) fine-tuning. Unlike traditional approaches that either quantize a fully trained model or introduce a few additional trainable parameters for [LoRA](https://github.com/microsoft/LoRA)-style fine-tuning, this project directly fine-tunes the quantized parameters of LLMs. This paradigm shift enables full-scale quantized fine-tuning of LLMs, ensuring that the training process is tightly integrated with the quantization schema from the outset.
+- [green-bit-llm-inference](https://github.com/GreenBitAI/green-bit-llm/tree/main/green_bit_llm/inference) also showcases BIE's support for inference with models quantized down from 4 to 2 bits, without significant accuracy loss compared to the original 32- or 16-bit models. It demonstrates BIE's ability to balance model size, computational efficiency, and accuracy, addressing one of the key challenges in deploying sophisticated neural networks in resource-constrained environments.
 
 These projects exemplify the practical applications of Bitorch Engine and underscore its flexibility and efficiency for modern AI research and development.
 However, keep in mind that BIE is still in an early beta stage; see our roadmap below.
```
```diff
@@ -219,8 +219,8 @@ Check out the [Documentation](https://greenbitai.github.io/bitorch-engine) for A
 ## Examples
 
 - Basic example scripts can be found directly in [examples](examples).
-- [green-bit-llm-trainer](https://github.com/GreenBitAI/green-bit-llm/tree/main/sft) showcases the fine-tuning of LLMs with quantized parameters.
-- [green-bit-llm-inference](https://github.com/GreenBitAI/green-bit-llm/tree/main/inference) showcases BIE's support for fast inference with 4- to 2-bit LLMs.
+- [green-bit-llm-trainer](https://github.com/GreenBitAI/green-bit-llm/tree/main/green_bit_llm/sft) showcases the fine-tuning of LLMs with quantized parameters.
+- [green-bit-llm-inference](https://github.com/GreenBitAI/green-bit-llm/tree/main/green_bit_llm/inference) showcases BIE's support for fast inference with 4- to 2-bit LLMs.
```
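The trainer entry in the diff above centers on one technique: fine-tuning the quantized parameters of an LLM directly, rather than training a few adapter parameters beside a frozen model. As a rough illustration of that idea, here is a minimal sketch in plain PyTorch built on a straight-through estimator (STE). It is not bitorch-engine's actual API: the `QuantizedLinear` layer, the 16-level (roughly 4-bit) grid, and the quantizer itself are illustrative assumptions.

```python
# Illustrative sketch only: NOT bitorch-engine's API. It shows the general
# idea of fine-tuning "inside" a quantization schema via a straight-through
# estimator (STE): weights are quantized in every forward pass, while the
# gradients update the latent full-precision weights.
import torch
import torch.nn as nn


class STEQuantize(torch.autograd.Function):
    """Fake-quantize to a uniform grid; pass gradients straight through."""

    @staticmethod
    def forward(ctx, w, n_levels):
        scale = w.abs().max() / (n_levels / 2 - 1)
        q = torch.round(w / scale).clamp(-n_levels / 2, n_levels / 2 - 1)
        return q * scale

    @staticmethod
    def backward(ctx, grad_output):
        # STE: treat quantization as the identity for the weight gradient.
        return grad_output, None


class QuantizedLinear(nn.Module):
    """Hypothetical linear layer whose weights are quantized on the fly."""

    def __init__(self, in_features, out_features, n_levels=16):  # 16 levels ~ 4-bit
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        self.n_levels = n_levels

    def forward(self, x):
        w_q = STEQuantize.apply(self.weight, self.n_levels)
        return x @ w_q.t()


# Toy fine-tuning step: the optimizer updates the latent weights, but the
# model only ever computes with their quantized versions, so training stays
# consistent with the quantization schema from the outset.
layer = QuantizedLinear(64, 64)
opt = torch.optim.SGD(layer.parameters(), lr=1e-2)
x, target = torch.randn(8, 64), torch.randn(8, 64)
loss = nn.functional.mse_loss(layer(x), target)
loss.backward()
opt.step()
```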
docs/source/index.rst: 2 additions, 2 deletions
```diff
@@ -6,8 +6,8 @@ Welcome to the documentation of Bitorch Engine (BIE): a cutting-edge computation
 
 Building on the foundational strengths of Bitorch Engine, the technology has been employed in pioneering projects that push the boundaries of neural network training and inference. For instance:
 
-- `green-bit-llm-trainer <https://github.com/GreenBitAI/green-bit-llm/tree/main/sft>`_: In this project, BIE represents a significant leap in Large Language Model (LLM) fine-tuning. Unlike traditional approaches that either quantize a fully trained model or introduce a few additional trainable parameters for `LoRA <https://github.com/microsoft/LoRA>`_-style fine-tuning, this project directly fine-tunes the quantized parameters of LLMs. This paradigm shift enables full-scale quantized fine-tuning of LLMs, ensuring that the training process is tightly integrated with the quantization schema from the outset.
-- `green-bit-llm-inference <https://github.com/GreenBitAI/green-bit-llm/tree/main/inference>`_ also showcases BIE's support for inference with models quantized down from 4 to 2 bits, without significant accuracy loss compared to the original 32- or 16-bit models. It demonstrates BIE's ability to balance model size, computational efficiency, and accuracy, addressing one of the key challenges in deploying sophisticated neural networks in resource-constrained environments.
+- `green-bit-llm-trainer <https://github.com/GreenBitAI/green-bit-llm/tree/main/green_bit_llm/sft>`_: In this project, BIE represents a significant leap in Large Language Model (LLM) fine-tuning. Unlike traditional approaches that either quantize a fully trained model or introduce a few additional trainable parameters for `LoRA <https://github.com/microsoft/LoRA>`_-style fine-tuning, this project directly fine-tunes the quantized parameters of LLMs. This paradigm shift enables full-scale quantized fine-tuning of LLMs, ensuring that the training process is tightly integrated with the quantization schema from the outset.
+- `green-bit-llm-inference <https://github.com/GreenBitAI/green-bit-llm/tree/main/green_bit_llm/inference>`_ also showcases BIE's support for inference with models quantized down from 4 to 2 bits, without significant accuracy loss compared to the original 32- or 16-bit models. It demonstrates BIE's ability to balance model size, computational efficiency, and accuracy, addressing one of the key challenges in deploying sophisticated neural networks in resource-constrained environments.
 
 All changes are tracked in the `changelog <https://github.com/GreenBitAI/bitorch-engine/blob/main/CHANGELOG.md>`_.
```
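Both files' inference entries revolve around running models whose weights are stored at 2 to 4 bits. The sketch below, again plain PyTorch rather than bitorch-engine's implementation, shows the storage and compute pattern such inference typically involves: 2-bit weight codes packed four per byte, then unpacked and dequantized with per-output-channel scales just before the matmul. The packing layout and the symmetric 4-level grid are assumptions for illustration.

```python
# Illustrative sketch only: NOT bitorch-engine's implementation. It shows the
# pattern behind low-bit inference: 2-bit codes packed four per uint8 byte,
# unpacked and dequantized on the fly before a normal matmul.
import torch


def pack_2bit(codes: torch.Tensor) -> torch.Tensor:
    """Pack codes in [0, 3] four-per-byte (last dim must be divisible by 4)."""
    c = codes.to(torch.uint8).reshape(*codes.shape[:-1], -1, 4)
    return c[..., 0] | (c[..., 1] << 2) | (c[..., 2] << 4) | (c[..., 3] << 6)


def unpack_2bit(packed: torch.Tensor) -> torch.Tensor:
    """Inverse of pack_2bit: recover the 2-bit codes."""
    parts = [(packed >> s) & 0x3 for s in (0, 2, 4, 6)]
    return torch.stack(parts, dim=-1).reshape(*packed.shape[:-1], -1)


# Quantize a weight matrix to the 4-level grid {-1.5, -0.5, 0.5, 1.5} * scale,
# with one scale per output channel.
w = torch.randn(64, 128)
scale = w.abs().amax(dim=1, keepdim=True) / 1.5
codes = torch.round(w / scale + 1.5).clamp(0, 3)   # unsigned codes 0..3
packed = pack_2bit(codes)                          # 4x denser than int8 storage

# Inference: unpack, dequantize, then a standard matmul.
x = torch.randn(8, 128)
w_hat = (unpack_2bit(packed).float() - 1.5) * scale
y = x @ w_hat.t()
```

In a real engine the unpack and matmul would be fused in a custom kernel so the full-precision weights never materialize; this sketch separates the steps only to keep the layout visible.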