This repository was archived by the owner on May 20, 2025. It is now read-only.

Commit 730e6d7

Apply suggestions from code review
Co-authored-by: Ryan Cartwright <[email protected]>
1 parent d21e3e8 commit 730e6d7

File tree: 1 file changed (+2 −2 lines)


docs/guides/python/serverless-llama.mdx

Lines changed: 2 additions & 2 deletions
````diff
@@ -38,7 +38,7 @@ nitric new translator py-starter
 cd translator
 ```
 
-Next, let's install our base dependencies, then add the extra dependencies we need specifically loading our language model.
+Next, let's install our base dependencies, then add the extra dependencies we need specifically for loading our language model.
 
 ```bash
 # Install the base dependencies
@@ -48,7 +48,7 @@ uv add llama-cpp-python
 
 You will also need to [download the Llama model](https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF/tree/main) file and ensure it is located in the `./models/` directory with the correct model file name.
 
-In this guide we'll be using 'Llama-3.2-1B-Instruct-Q4_K_M.gguf, this model is ideal for serverless because its reduced size and efficient 4-bit quantization make it cost-effective and scalable, running smoothly within the resource limits of serverless compute environments while maintaining solid performance.
+In this guide we'll be using `Llama-3.2-1B-Instruct-Q4_K_M.gguf`, this model is ideal for serverless because its reduced size and efficient 4-bit quantization make it cost-effective and scalable, running smoothly within the resource limits of serverless compute environments while maintaining solid performance.
 
 Your folder structure should look like this:
 
````
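The guide text in this diff installs `llama-cpp-python` and expects the quantized GGUF file under `./models/`. A minimal sketch of loading that model is below; the lazy import, the `None` guard for a missing download, and the `n_ctx` value are assumptions for illustration, not part of the commit:

```python
from pathlib import Path

# Path the guide expects after downloading the model from Hugging Face
MODEL_PATH = Path("models/Llama-3.2-1B-Instruct-Q4_K_M.gguf")

def load_model(path: Path = MODEL_PATH):
    """Load the quantized model with llama-cpp-python, or return None
    if the model file has not been downloaded yet."""
    if not path.is_file():
        return None
    from llama_cpp import Llama  # installed via: uv add llama-cpp-python
    # n_ctx is an illustrative context size; tune it to the memory
    # limits of your serverless compute environment
    return Llama(model_path=str(path), n_ctx=2048)
```

The lazy import keeps the module importable before the model-download step; the package name comes from the diff's `uv add llama-cpp-python` line.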

0 commit comments
