Commit d1b267b

Updates
1 parent dc6c7c4 commit d1b267b

File tree (1 file changed: +1 −1)

  • content/learning-paths/servers-and-cloud-computing/arcee-foundation-model-on-gcp


content/learning-paths/servers-and-cloud-computing/arcee-foundation-model-on-gcp/00_overview.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -20,7 +20,7 @@ This hands-on guide helps developers build cost-efficient, high-performance LLM
 - **Set up your environment**: install build tools and dependencies (CMake, Python, Git)
 - **Build the inference engine**: clone the [Llama.cpp](https://github.com/ggerganov/llama.cpp) repository and compile the project for your Arm-based environment
 - **Prepare the model**: download the AFM-4.5B model files from Hugging Face and use Llama.cpp’s quantization tools to reduce model size and optimize performance
-- **Run inference**: load the quantized model and run sample prompts using Llama.cpp
+- **Run inference**: load the quantized model and run sample prompts using Llama.cpp
 - **Evaluate model quality**: calculate perplexity or use other metrics to assess performance

 {{< notice Note >}}
```

The removed and added lines render identically here; the one-line change is most likely whitespace-only, which the text extraction does not preserve.
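The steps listed in the diff above can be sketched as shell commands. This is a minimal sketch, not the learning path's exact instructions: the Hugging Face repo id (`arcee-ai/AFM-4.5B`), the GGUF file names, the quantization type (`Q4_K_M`), and the perplexity corpus path are all assumptions for illustration — check the model card and the learning path itself for the actual values.

```shell
# Build llama.cpp from source (on an Arm-based aarch64 environment)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j"$(nproc)"

# Download the AFM-4.5B model files (repo id assumed; verify on Hugging Face)
huggingface-cli download arcee-ai/AFM-4.5B --local-dir models/afm-4.5b

# Convert to GGUF, then quantize to reduce size (Q4_K_M chosen as an example)
python convert_hf_to_gguf.py models/afm-4.5b --outfile models/afm-4.5b-f16.gguf
./build/bin/llama-quantize models/afm-4.5b-f16.gguf models/afm-4.5b-q4_k_m.gguf Q4_K_M

# Run a sample prompt against the quantized model
./build/bin/llama-cli -m models/afm-4.5b-q4_k_m.gguf \
  -p "Explain quantization in one sentence." -n 128

# Evaluate model quality: perplexity over a text corpus (corpus path assumed)
./build/bin/llama-perplexity -m models/afm-4.5b-q4_k_m.gguf \
  -f wikitext-2-raw/wiki.test.raw
```

The build and quantize steps are one-time costs; only the `llama-cli` and `llama-perplexity` invocations need repeating as you iterate on prompts or quantization settings.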
