
Commit 4b3bc6d

Move image to assets

Signed-off-by: Yuan Tang <[email protected]>

1 parent 5ea205d · commit 4b3bc6d

File tree: 2 files changed (+4 −2 lines)


_posts/2025-01-27-intro-to-llama-stack-with-vllm.md

Lines changed: 4 additions & 2 deletions
```diff
@@ -2,14 +2,16 @@
 layout: post
 title: "Introducing vLLM Inference Provider in Llama Stack"
 author: "Yuan Tang (Red Hat) and Ashwin Bharambe (Meta)"
-image: /assets/logos/vllm-logo-only-light.png
+image: /assets/figures/llama-stack/llama-stack.png
 ---
 
 We are excited to announce that vLLM inference provider is now available in [Llama Stack](https://github.com/meta-llama/llama-stack) through the collaboration between the Red Hat AI Engineering team and the Llama Stack team from Meta. This article provides an introduction to this integration and a tutorial to help you get started using it locally or deploying it in a Kubernetes cluster.
 
 # What is Llama Stack?
 
-<img align="right" src="https://llama-stack.readthedocs.io/en/latest/_images/llama-stack.png" alt="llama-stack-diagram" width="50%" height="50%">
+<div align="center">
+<img src="/assets/figures/llama-stack/llama-stack.png" alt="Icon" style="width: 60%; vertical-align:middle;">
+</div>
 
 Llama Stack defines and standardizes the set of core building blocks needed to bring generative AI applications to market. These building blocks are presented in the form of interoperable APIs with a broad set of Service Providers providing their implementations.
```
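For context on the provider this post announces: below is a minimal sketch of how a vLLM server might be wired into a Llama Stack run configuration. The `remote::vllm` provider type and the `url` field are assumptions based on upstream Llama Stack conventions, not part of this commit; check the post's tutorial or the Llama Stack docs for the exact schema.

```yaml
# Hypothetical excerpt from a Llama Stack run.yaml (schema assumed, not from this commit).
# Points the inference API at a locally running vLLM server's
# OpenAI-compatible endpoint (e.g., one started with `vllm serve <model>`).
providers:
  inference:
  - provider_id: vllm
    provider_type: remote::vllm        # remote vLLM provider type (assumed name)
    config:
      url: http://localhost:8000/v1    # vLLM's OpenAI-compatible API base URL
```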

assets/figures/llama-stack/llama-stack.png

Binary file added (118 KB); preview not shown.

0 commit comments
