1 parent 1640c9f commit 8749f13
_posts/2025-08-05-gpt-oss-support.md
@@ -56,8 +56,8 @@ vLLM requires nightly built PyTorch to serve GPT models. To ensure compatibility
Install LMCache from source (this command may take a few minutes due to CUDA kernel compilations):

```bash
-git clone https://github.com/LMCache/lmcache.github.io.git
-cd lmcache
+git clone https://github.com/LMCache/LMCache.git
+cd LMCache

# In your virtual environment
ENABLE_CXX11_ABI=1 uv pip install -e . --no-build-isolation
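For reference, this is how the installation snippet in the post reads after the commit, assembled from the `+` side of the diff above:

```bash
# Clone the LMCache source repository (corrected URL) and build from source
git clone https://github.com/LMCache/LMCache.git
cd LMCache

# In your virtual environment
ENABLE_CXX11_ABI=1 uv pip install -e . --no-build-isolation
```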