Commit 0e0bd14

adding warning labels to site while we update docs (kubernetes-sigs#1466)
1 parent 060c3fe commit 0e0bd14

3 files changed: +14 -0 lines changed

README.md

Lines changed: 3 additions & 0 deletions

@@ -10,6 +10,9 @@ This is achieved by leveraging Envoy's [External Processing] (ext-proc) to exten
 
 [Inference Gateway]:#concepts-and-definitions
 
+
+> ***NOTE*** : As we prep for our `v1` release, some of our docs may fall out of scope, we are working hard to get these up to date and they will be ready by the time we launch `v1`. Thanks!
+
 ## New!
 Inference Gateway has partnered with vLLM to accelerate LLM serving optimizations with [llm-d](https://llm-d.ai/blog/llm-d-announce)!

site-src/guides/index.md

Lines changed: 5 additions & 0 deletions

@@ -4,6 +4,11 @@
 
 This project is still in an alpha state and breaking changes may occur in the future.
 
+???+ warning
+
+
+    This page is out of date with the v1.0.0 release candidate. Updates under active development
+
 This quickstart guide is intended for engineers familiar with k8s and model servers (vLLM in this instance). The goal of this guide is to get an Inference Gateway up and running!
 
 ## **Prerequisites**

site-src/index.md

Lines changed: 6 additions & 0 deletions

@@ -1,5 +1,11 @@
 # Introduction
 
+???+ warning
+
+
+    Some portions of this site may be out of date with the v1.0.0 release candidate.
+    Updates under active development!
+
 Gateway API Inference Extension is an official Kubernetes project that optimizes self-hosting Generative Models on Kubernetes.
 
 The overall resource model focuses on 2 new inference-focused
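
A side note on the `???+ warning` syntax added in both site-src files: it is the collapsible admonition form used by Material for MkDocs (via the PyMdown Extensions package), where the trailing `+` renders the box expanded by default and the indented lines form the admonition body. As a hedged sketch only (this commit does not touch the site configuration, and the repo's actual `mkdocs.yml` may differ), such admonitions assume settings along these lines:

```yaml
# Minimal sketch of the mkdocs.yml markdown extensions that "???+ warning" blocks rely on.
# Illustrative assumption only, not part of this commit; the project's real config may differ.
markdown_extensions:
  - admonition        # enables "!!!" style call-out boxes
  - pymdownx.details  # makes "???" / "???+" admonitions collapsible ("+" = open by default)
```

The four-space indentation on the warning text in both hunks is what scopes those lines to the admonition body rather than the surrounding page.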
