
Commit eaa0f8c

Update docs/guides/mlp_tutorials/llm-inference.md
1 parent: c8fdc4f

File tree: 1 file changed (+1, -1 lines)

docs/guides/mlp_tutorials/llm-inference.md

Lines changed: 1 addition & 1 deletion
@@ -137,7 +137,7 @@ At this point, you can exit the SLURM allocation again by typing `exit`. If you
 
 Cool, now you have a working container with PyTorch and all the necessary Python packages installed! Let's move on to Gemma-7B. We write a Python script `$SCRATCH/gemma-inference/gemma-inference.py` to load the model and prompt it with some custom text. The Python script should look like this:
 
-```
+```python title="$SCRATCH/gemma-inference/gemma-inference.py"
 from transformers import AutoTokenizer, AutoModelForCausalLM
 import torch
 
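For context, the diff only shows the first two lines of the tutorial's script. A minimal sketch of what a complete `gemma-inference.py` along these lines could look like is below; the checkpoint name `google/gemma-7b`, the prompt text, and the generation settings are illustrative assumptions, not taken from the diff itself.

```python
# Hypothetical sketch of $SCRATCH/gemma-inference/gemma-inference.py;
# the model id, prompt, and generation settings are assumptions.
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "google/gemma-7b"  # assumed Hugging Face Hub checkpoint name

# Load the tokenizer and the model; bfloat16 keeps the 7B model within GPU memory,
# and device_map="auto" places the weights on the available GPU(s).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example prompt (placeholder, not from the tutorial).
prompt = "Write me a poem about the Swiss Alps."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a completion and print the decoded text.
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Inside the container, a script like this would typically be run with `python gemma-inference.py` from the `$SCRATCH/gemma-inference` directory.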