Today, we’re excited to release:

- Transformers integration
- Integration with Text Generation Inference for fast and efficient production-ready inference
- Integration with Inference Endpoints
- Integration with VS Code extension
- Code benchmarks

Code LLMs are an exciting development for software engineers because they can boost productivity through code completion in IDEs, take care of repetitive or annoying tasks like writing docstrings, or create unit tests.

- [Using text-generation-inference and Inference Endpoints](#using-text-generation-inference-and-inference-endpoints)
- [Using VS Code extension](#using-vs-code-extension)
- [Evaluation](#evaluation)
- [Additional Resources](#additional-resources)

You can try out Text Generation Inference on your own infrastructure, or you can use Hugging Face’s Inference Endpoints.

You can learn more about how to [deploy LLMs with Hugging Face Inference Endpoints](https://huggingface.co/blog/inference-endpoints-llm) in our blog post, which includes information about supported hyperparameters and how to stream your response using Python and JavaScript.
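
For illustration, here is a minimal sketch of streaming a completion from a deployed endpoint with the `huggingface_hub` Python client; the endpoint URL is a placeholder, and the snippet assumes a `huggingface_hub` version that ships `InferenceClient`.

```python
# pip install huggingface_hub
from huggingface_hub import InferenceClient

# Point the client at your own Inference Endpoint (placeholder URL).
client = InferenceClient("https://YOUR-ENDPOINT.endpoints.huggingface.cloud")

# stream=True yields tokens as they are generated instead of
# waiting for the full completion.
for token in client.text_generation(
    "def fibonacci(n):",
    max_new_tokens=128,
    stream=True,
):
    print(token, end="", flush=True)
```
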
### Using VS Code extension

[HF Code Autocomplete](https://marketplace.visualstudio.com/items?itemName=HuggingFace.huggingface-vscode) is a VS Code extension for testing open-source code completion models. The extension was developed as part of the [StarCoder project](/blog/starcoder#tools--demos) and has been updated to support the medium-sized base model, [Code Llama 13B](/codellama/CodeLlama-13b-hf). Find out more about how to install and run the extension with Code Llama [here](https://github.com/huggingface/huggingface-vscode#code-llama).

Language models for code are typically benchmarked on datasets such as HumanEval. It consists of programming challenges where the model is presented with a function signature and a docstring and is tasked with completing the function body. The proposed solution is then verified by running a set of predefined unit tests. Finally, a pass rate is reported that describes how many solutions passed all tests. The pass@1 rate describes how often the model generates a passing solution in a single attempt, whereas pass@10 describes how often at least one out of 10 proposed candidates passes.
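
To make the metric concrete, here is a small sketch of the unbiased pass@k estimator introduced alongside HumanEval (Chen et al., 2021), which is the standard way to compute these rates from n sampled solutions per problem; the function name and the toy inputs below are illustrative.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn from n generations of which c passed all unit tests, is correct."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    # 1 - C(n-c, k) / C(n, k), computed stably as a running product
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Toy example: 200 samples per problem, 37 of which pass the tests
print(pass_at_k(n=200, c=37, k=1))   # 0.185 (= 37/200)
print(pass_at_k(n=200, c=37, k=10))  # ≈ 0.877
```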