## News
🔥🔥 [2023/09/27] [CodeFuse-QWen-14B](https://huggingface.co/codefuse-ai/CodeFuse-QWen-14B) has been released, achieving a pass@1 (greedy decoding) score of 48.8% on HumanEval, a 16% absolute improvement over the base model [Qwen-14B](https://huggingface.co/Qwen/Qwen-14B).

🔥🔥 [2023/09/27] [CodeFuse-StarCoder-15B](https://huggingface.co/codefuse-ai/CodeFuse-StarCoder-15B) has been released, achieving a pass@1 (greedy decoding) score of 54.9% on HumanEval.

🔥🔥🔥 [2023/09/26] We are pleased to announce the release of the [4-bit quantized version of CodeFuse-CodeLlama-34B](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B-4bits). Despite quantization, the model still achieves a pass@1 (greedy decoding) score of 73.8% on HumanEval.

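The pass@1 scores quoted above are obtained with greedy decoding. Purely as an illustration (this is not the official evaluation harness), the following sketch runs a HumanEval-style completion greedily with Hugging Face `transformers`; the checkpoint, dtype, and `trust_remote_code` flag are assumptions here, so defer to each model card's recommended usage.

```python
# Illustrative only: greedy decoding of a HumanEval-style prompt with transformers.
# Checkpoint, dtype, and trust_remote_code are assumptions; consult the model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codefuse-ai/CodeFuse-StarCoder-15B"  # any of the released checkpoints

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = 'def fibonacci(n):\n    """Return the n-th Fibonacci number."""\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# do_sample=False gives greedy decoding, matching the pass@1 setting quoted above.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion)
```
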
| Model                            | HumanEval(pass@1) |  Date   |
|:---------------------------------|:-----------------:|:-------:|
| **CodeFuse-CodeLlama-34B**       |     **74.4%**     | 2023/09 |
| **CodeFuse-CodeLlama-34B-4bits** |     **73.8%**     | 2023/09 |
| WizardCoder-Python-34B-V1.0      |       73.2%       | 2023/08 |
| GPT-4 (zero-shot)                |       67.0%       | 2023/03 |
| PanGu-Coder2 15B                 |       61.6%       | 2023/08 |
| **CodeFuse-StarCoder-15B**       |     **54.9%**     | 2023/08 |
| CodeLlama-34b-Python             |       53.7%       | 2023/08 |
| **CodeFuse-QWen-14B**            |     **48.8%**     | 2023/10 |
| CodeLlama-34b                    |       48.8%       | 2023/08 |
| GPT-3.5 (zero-shot)              |       48.1%       | 2022/11 |
| OctoCoder                        |       46.2%       | 2023/08 |
| StarCoder-15B                    |       33.6%       | 2023/05 |
| Qwen-14B                         |       32.3%       | 2023/10 |

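For context on the metric: with greedy decoding there is a single sample per problem, so pass@1 is simply the fraction of HumanEval problems whose one completion passes the unit tests. When several samples per problem are drawn, the standard unbiased pass@k estimator from the Codex paper (Chen et al., 2021) is typically used; a minimal sketch:

```python
# Unbiased pass@k estimator from "Evaluating Large Language Models Trained on Code"
# (Chen et al., 2021). With one greedy sample per problem (n=1), pass@1 reduces to
# the fraction of problems whose single completion passes all unit tests.
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """n: samples generated per problem, c: samples that pass, k: the k in pass@k."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Example: 10 samples for one problem, 3 of them correct.
print(pass_at_k(n=10, c=3, k=1))  # 0.3
print(pass_at_k(n=10, c=3, k=5))  # ~0.917
```
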
## Articles

We are excited to release the following CodeLLMs trained by MFTCoder, now available on Hugging Face:

| Model                                                                                                 | Base Model           | Training Samples | Batch Size | Seq Length |
|-------------------------------------------------------------------------------------------------------|----------------------|------------------|------------|------------|
| [🔥🔥🔥 CodeFuse-CodeLlama-34B](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B)             | CodeLlama-34b-Python | 600k             | 80         | 4096       |
| [🔥🔥🔥 CodeFuse-CodeLlama-34B-4bits](https://huggingface.co/codefuse-ai/CodeFuse-CodeLlama-34B-4bits) | CodeLlama-34b-Python |                  |            | 4096       |
| [🔥🔥🔥 CodeFuse-StarCoder-15B](https://huggingface.co/codefuse-ai/CodeFuse-StarCoder-15B)             | StarCoder            | 600k             | 256        | 4096       |
| [🔥🔥🔥 CodeFuse-QWen-14B](https://huggingface.co/codefuse-ai/CodeFuse-QWen-14B)                       | Qwen-14B             | 1100k            | 256        | 4096       |
| [🔥 CodeFuse-13B](https://huggingface.co/codefuse-ai/CodeFuse-13B)                                     | CodeFuse-13B         | 66k              | 64         | 4096       |

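For the 4-bit CodeFuse-CodeLlama-34B checkpoint listed above, the `-4bits` naming suggests a GPTQ-style quantization. Under that assumption, loading it with `auto_gptq` could look roughly like the sketch below; the exact flags and the officially supported recipe are on the Hugging Face model card, which takes precedence.

```python
# Assumption: the -4bits checkpoint is GPTQ-quantized and loadable via auto-gptq.
# Check the Hugging Face model card for the exact, supported loading recipe.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_id = "codefuse-ai/CodeFuse-CodeLlama-34B-4bits"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    device="cuda:0",               # single-GPU placement; adjust to your hardware
    inject_fused_attention=False,
    trust_remote_code=True,
)

inputs = tokenizer("def quicksort(arr):\n", return_tensors="pt").to("cuda:0")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)  # greedy
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```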