diff --git a/README.md b/README.md
index 51aaf2829..ff193dfb1 100644
--- a/README.md
+++ b/README.md
@@ -10,6 +10,9 @@ This is achieved by leveraging Envoy's [External Processing] (ext-proc) to exten
 
 [Inference Gateway]:#concepts-and-definitions
 
+
+> ***NOTE***: As we prepare for our `v1` release, some of our docs may fall out of date. We are working hard to bring them up to date, and they will be ready by the time we launch `v1`. Thanks!
+
 ## New!
 Inference Gateway has partnered with vLLM to accelerate LLM serving optimizations with [llm-d](https://llm-d.ai/blog/llm-d-announce)!
diff --git a/site-src/guides/index.md b/site-src/guides/index.md
index dbb104c31..58bb58035 100644
--- a/site-src/guides/index.md
+++ b/site-src/guides/index.md
@@ -4,6 +4,11 @@
 
 This project is still in an alpha state and breaking changes may occur in the future.
 
+???+ warning
+
+
+    This page is out of date with the v1.0.0 release candidate. Updates are under active development.
+
 This quickstart guide is intended for engineers familiar with k8s and model servers (vLLM in this instance). The goal of this guide is to get an Inference Gateway up and running!
 
 ## **Prerequisites**
diff --git a/site-src/index.md b/site-src/index.md
index 7a2a116bf..7539d01ca 100644
--- a/site-src/index.md
+++ b/site-src/index.md
@@ -1,5 +1,11 @@
 # Introduction
 
+???+ warning
+
+
+    Some portions of this site may be out of date with the v1.0.0 release candidate.
+    Updates are under active development!
+
 Gateway API Inference Extension is an official Kubernetes project that optimizes self-hosting
 Generative Models on Kubernetes. The overall resource model focuses on 2 new inference-focused